Posts Tagged 'SAN'

November 4, 2015

Shared, scalable, and resilient storage without SAN

Storage area networks (SANs) are used most often in the enterprise world. In many enterprises, you will see racks filled with these large storage arrays. They are mainly used to provide a centralized storage platform, albeit one with limited scalability. They require special training to operate; they are expensive to purchase, support, and expand; and when one of these devices fails, there is big trouble.

Some people might say SAN devices are a necessary evil. But are they really necessary? Aren’t there alternatives?

Most startups nowadays run their services on commodity hardware, with smart software distributing their content across server farms globally. Established, successful companies that run websites or apps, such as WhatsApp, Facebook, and LinkedIn, continue to operate pretty much the same way they started. They need to scale and perform at unpredictable rates all around the world, so they combine commodity hardware with smart software. These companies need the features that SAN storage offers, but with more scalability and global resiliency, and without being centralized or having to buy expensive hardware. So how do they give multiple servers access to the same data, and how do they avoid data loss?

The answer is actually quite simple, although its technology is quite sophisticated: distributed storage.

In a world where virtualization has become a standard for most companies, where even applications and networking are being virtualized, virtualization giant VMware answers this question with Virtual SAN. It effectively eliminates the need for SAN hardware in a VMware environment (and it will also be available for purchase from SoftLayer before the end of the year). Other similar distributed products are GlusterFS (also offered in our QuantaStor solution), Ceph, Microsoft Windows DFS, Hadoop HDFS, document-oriented databases like MongoDB, and many more.

These solutions vary in maturity, however. Object storage is a great example of a newer type of storage on the market that doesn't require SAN devices. With SoftLayer, you can run them all.

When you have bare metal servers set up as hypervisors or application servers, it's likely those servers have a lot of drive bays, mostly unused. Stuff them with hard drives, let the software distribute your data across multiple servers in multiple locations with two or three replicas of each piece of data, and the result is a big, safe, fast, distributed storage platform. Scaling such a platform is simply a matter of adding more bare metal servers with even more hard drives and letting the software handle the rest.
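To make "letting the software handle the rest" concrete, here is a deliberately simplified Python sketch of replica placement: hash each data item's key and store copies on three of the available servers. The server names and key are hypothetical, and real systems (Ceph's CRUSH maps, Swift's rings, VSAN) use far more sophisticated placement logic.

```python
import hashlib

SERVERS = ["bm01", "bm02", "bm03", "bm04", "bm05"]  # hypothetical bare metal hosts
REPLICAS = 3

def placement(key, replicas=REPLICAS):
    """Pick `replicas` distinct servers for a data item by hashing its key."""
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(SERVERS)
    return [SERVERS[(start + i) % len(SERVERS)] for i in range(replicas)]

print(placement("customer-42/invoice.pdf"))  # e.g. ['bm04', 'bm05', 'bm01']
```

Because every key deterministically maps to the same three servers, any node can locate the data, and losing one server still leaves two copies.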

Nowadays we are seeing more and more hardware solutions like SAN—or even networking—being replaced with smarter software running on simpler, more affordable hardware. At SoftLayer, we offer month-to-month and hourly bare metal servers with up to 36 drive bays, providing a lot of potential room for storage. With 10Gbps global connectivity options, we offer fast, low-latency networking for syncing between servers and delivering data to the customer.


November 11, 2014

Which storage solution is best for your project?

Before you build applications around our network storage, here's a refresher on what network storage is, how it is used, the different types available, and the best uses for each.

What is network storage? Why would you use it?

Appropriately named, network storage is storage attached to a server over our network, not to be confused with direct-attached storage (DAS), which is a hard drive located in the server (or connected with a device like a SCSI or USB cable). Although DAS transfers data to a server faster than network storage, thanks to system caching and the absence of network latency, there is still a strong place for network storage.

Many different servers can access network storage, and with some network storage solutions, more than one server can get data from the same shared storage volume simultaneously. This comes in handy if one server dies, because another can pick up the storage volume and start where the first left off.

With DAS, planned downtime for server upgrades, potential data loss, and provisioning larger or more servers can slow down productivity. The physical constraints of internal drives and costs associated with servers do not affect network storage.

Because SoftLayer manages the disk space of our network storage products, there's no need to worry about rebuilding a redundant array of inexpensive disks (RAID) or dealing with failed disks. If a disk fails, SoftLayer automatically replaces it and rebuilds the RAID—in most cases you would be unaware that the changes occurred.

Select network storage solutions come with tools for protecting your important data: schedule snapshots of your data, promote snapshots to full volumes, or reset your data to the snapshot point.

And with network storage, downtime is minimal. Disaster recovery tools available on select storage solutions let you send a command to quickly fail over to a different data center, so you can still access your data if our network is ever down in one location.

Types of Network Storage and How They Are Different

Storage Area Network (SAN) or Block Storage

Block storage works like DAS, just remotely—only a single server can access a block storage volume at a time. Using the Internet small computer system interface (iSCSI) protocol over a secure transmission control protocol/Internet protocol (TCP/IP) connection, SoftLayer's block storage offers excellent features for backup and disaster recovery, and adding snapshot schedules and failover redundancy makes it a powerful enterprise solution.
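For a rough feel of what attaching a block volume looks like in practice, the sketch below drives the standard open-iscsi command-line tool from Python. The portal IP and IQN are placeholders for the values your storage provider hands you, and the commands must run as root on the server that will own the volume.

```python
import subprocess

PORTAL = "10.0.0.10"                      # placeholder: storage target IP on the private network
IQN = "iqn.2001-05.com.example:volume01"  # placeholder: iSCSI qualified name of the volume

# Ask the portal which targets it exposes, then log in to attach the volume.
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)
subprocess.run(["iscsiadm", "-m", "node", "-T", IQN, "-p", PORTAL, "--login"], check=True)

# The volume now appears as a local block device (e.g. /dev/sdb) on this one server.
```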

Network Attached Storage (NAS) or File Storage

File storage acts like a remote file system. It has a slim operating system that allows servers to treat it like a remote directory structure. Multiple servers can share files on the same storage simultaneously. Our new consistent performance storage lets you share files quickly and easily using a network file system (NFS) with your choice of performance level and secure connections.
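As a small illustration of that simultaneous sharing, suppose the volume is NFS-mounted at /mnt/shared on several servers (the path and log entry below are made up). Each writer takes an advisory lock first so appends from different servers don't interleave:

```python
import fcntl  # Unix-only; advisory file locking

# Append a record to a file on the shared NFS mount (path is illustrative).
with open("/mnt/shared/events.log", "a") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold the exclusive lock
    f.write("server-a: job 42 finished\n")
    fcntl.flock(f, fcntl.LOCK_UN)   # release so other servers can write
```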

We also offer common Internet file system (CIFS) storage for Windows, which uses a credential that grants access to any server on our private network. File storage can only be accessed by SoftLayer servers.

Object Storage

Object storage is a standalone storage entity with its own representational state transfer (REST) API that grants applications (not operating systems) access to the files stored there. Because it sits on a public network, servers in any of our data centers can directly access those files. Object storage also differs in the way files are stored: there is no directory structure; instead, metadata tags are used to categorize and search for files. In conjunction with a content delivery network (CDN), you can quickly serve files to your users or to mobile devices in close proximity.
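To give a feel for that REST API, here is a minimal Python sketch in the style of OpenStack Swift, which SoftLayer object storage is based on. The endpoint, token, container, and metadata values are placeholders, not working credentials:

```python
import requests

ENDPOINT = "https://example-endpoint/v1/AUTH_myaccount"  # placeholder account URL
HEADERS = {"X-Auth-Token": "replace-with-a-real-token"}  # placeholder auth token

# Upload an object, tagging it with searchable metadata instead of a directory path.
with open("beach.jpg", "rb") as f:
    requests.put(
        ENDPOINT + "/photos/beach.jpg",
        headers={**HEADERS, "X-Object-Meta-Location": "hawaii"},
        data=f,
    )

# Any application (or a CDN origin pull) can fetch the object back over plain HTTP.
image = requests.get(ENDPOINT + "/photos/beach.jpg", headers=HEADERS).content
```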

With pay-as-you-go pricing, you don't have to worry about running out of space. We charge based only on your greatest usage in any given day. That means you can get started right now for free!

Which storage solution is best for your project?

If you are still confused about which network storage option you should build your applications around, take this eight-question quiz to find out whether object, file, or block storage will work best for you.


August 1, 2013

The "Unified Field Theory" of Storage

This guest blog was contributed by William Rocca of OS NEXUS. OS NEXUS makes the QuantaStor Software Defined Storage platform, designed to tackle the storage challenges facing cloud computing, Big Data, and high-performance applications.

Over the last decade, the creation and popularization of SAN/NAS systems simplified the management of storage into a single appliance so businesses could efficiently share, secure, and manage data centrally. Fast forward about 10 years in storage innovation, and we're now rapidly changing from a world of proprietary hardware sold by big-iron vendors to open-source, scale-out storage technologies from software-only vendors that make use of commodity off-the-shelf hardware. Some of the new technologies are derivatives of traditional SAN/NAS with better scalability, while others are completely new. Object storage technologies such as OpenStack SWIFT have created a foundation for whole new types of applications, and big data technologies like MongoDB, Riak, and Hadoop go even further to blur the lines between storage and compute. These innovations provide a means for developing next-generation applications that can collect and analyze mountains of data. This is the exciting frontier of open storage today.

This frontier looks a lot like the "Wild West." With ad-hoc solutions that have great utility but are complex to set up and maintain, many users are effectively solving one-off problems, but these solutions are often narrowly defined and specifically designed for a particular application. The question everyone starts asking is, "Can't we just evolve to having one protocol ... one technology that unites them all?"

If each of these data-storing technologies has unique advantages for specific use cases or applications, the answer isn't to eliminate protocols. To borrow a well-known concept from physics, the solution lies in a "Unified Field Theory of Storage" — weaving them together into a cohesive software platform that makes them simple to deploy, maintain, and operate.

When you look at the latest generation of storage technologies, you'll notice a common thread: They're all highly available, scale-out, open-source, and serve as platforms for next-generation applications. While SAN/NAS storage is still the bread-and-butter enterprise storage platform today (and will be for some time to come), these older protocols often don't measure up to the needs of applications being developed today. They run into problems storing, processing, and gleaning value out of the mountains of data we're all producing.

Thinking about these challenges, how do we make these next-generation open storage technologies easy to manage and turn-key to deploy? What kind of platform could bring them all together? In short, "What does the 'Unified Field Theory of Storage' look like?"

These are the questions we've been trying to answer for the last few years at OS NEXUS, and the result of our efforts is the QuantaStor Software Defined Storage platform. In its first versions, we focused on building a flexible foundation supporting the traditional SAN/NAS protocols, but with the launch of QuantaStor v3 this year, we introduced the first scale-out version of QuantaStor and integrated the first next-gen open storage technology, Gluster, into the platform. In June, we launched support of ZFS on Linux (ZoL) and enhanced the platform with a number of advanced enterprise features, such as snapshots, compression, deduplication, and end-to-end checksums.

This is just the start, though. In our quest to solve the "Unified Field Theory of Storage," we're turning our eyes to integrating platforms like OpenStack SWIFT and Hadoop in QuantaStor v4 later this year, and as these high-power technologies are streamlined under a single platform, end users will have the ability to select the type(s) of storage that best fit a given application without having to learn (or unlearn) specific technologies.

The "Unified Field Theory of Storage" is emerging, and we hope to make it downloadable. Visit to keep an eye on our progress. If you want to incorporate QuantaStor into your environment, check out SoftLayer's preconfigured QuantaStor Mass Storage Server solution.

-William Rocca, OS NEXUS

October 18, 2011

Adding 'Moore' Storage Solutions

In 1965, Intel co-founder Gordon Moore observed an interesting trend: "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase."

Moore was initially noting the number of transistors that could be placed on an integrated circuit at a relatively constant minimal cost. Because that measure has proven so representative of the progress of our technological manufacturing abilities, "Moore's Law" has become a cornerstone in discussions of pricing, capacity, and speed of almost anything in the computer realm. You've probably heard the law used generically to refer to the constant improvement in technology: In two years, you can purchase twice as much capacity, speed, bandwidth, or any other easily measurable and relevant technology metric for the price you would pay today at current levels of production.
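As a back-of-the-envelope illustration of that generic reading (the starting figure is arbitrary, and real progress is bumpier than a clean doubling):

```python
# Naive projection: capacity per dollar doubles every two years.
base_gb = 32  # say, a 32GB thumb drive for some fixed price today
for years in (2, 4, 6, 8, 10):
    print("%2d years out: %4.0f GB for the same price" % (years, base_gb * 2 ** (years / 2)))
```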

Think back to your first computer. How much storage capacity did it have? You were excited to be counting in bytes and kilobytes ... "Look at all this space!" A few years later, you heard about people at NASA using "gigabytes" of space, and you were dumbfounded. Fast-forward a few more years, and you wonder how long your 32GB flash drive will last before you need to upgrade the capacity.

32GB Thumb Drive

As manufacturers have found ways to build bigger and faster drives, users have found ways to fill them up. As a result of this behavior, we generally go from "being able to use" a certain capacity to "needing to use" that capacity. From a hosting provider perspective, we've seen the same trend from our customers ... We'll introduce new high-capacity hard drives, and within weeks, we're getting calls about when we can double it. That's why we're always on the lookout for opportunities to incorporate product offerings that meet and (at least temporarily) exceed our customers' needs.

Today, we announced QuantaStor Storage Servers, dedicated mass storage appliances with exceptional cost-effectiveness, control, and scalability. Built on SoftLayer Mass Storage dedicated servers with the OS NEXUS QuantaStor Storage Appliance OS, the solution supports up to 48TB of data with the perfect combination of performance economics, scalability, and manageability. To give you a frame of reference, picture 48TB worth of hard drives.


If you've been looking for a fantastic, high-capacity storage solution, you should give our QuantaStor offering a spin. The SAN (iSCSI) + NAS (NFS) storage system delivers advanced storage features, including thin provisioning and remote replication. These capabilities make it ideally suited for a broad set of applications, including VM deployments, virtual desktops, and web and application servers. From what I've seen, it's at the top of the game right now, and it looks like a perfect option for long-term reliability and scalability.

