Posts Tagged 'Clusters'

December 6, 2012

MongoDB: Architectural Best Practices

With the launch of our MongoDB solutions, developers can provision powerful, optimized, horizontally scaling NoSQL database clusters in real time on bare metal infrastructure in SoftLayer data centers around the world. We worked tirelessly with our friends at 10gen (the creators of MongoDB) to build and tweak hardware and software configurations that enable peak MongoDB performance, and the resulting platform is pretty amazing. As Duke mentioned in his blog post, those efforts followed 10gen's MongoDB best practices, but what he didn't mention was that we created some architectural best practices of our own for MongoDB deployments on our platform.

The MongoDB engineered servers that you order from SoftLayer already implement several of the recommendations you'll see below, and I'll note which have been incorporated as we go through them. Given the scope of the topic, it's probably easiest to break this guide into a few sections to make it more digestible. Let's take a look at the architectural best practices of running MongoDB through the phases of the roll-out process: selecting a deployment strategy to prepare for your MongoDB installation, the installation itself, and the operational considerations of running it in production.

Deployment Strategy

When planning your MongoDB deployment, you should follow Sun Tzu's (modified) advice: "If you know the [friend] and know yourself, you need not fear the result of a hundred battles." "Friend" replaces "enemy" here because the other party is MongoDB. If you aren't familiar with MongoDB, the top of your to-do list should be to read MongoDB's official documentation. That information will give you the background you'll need as you build and use your database. When you feel comfortable with what MongoDB is all about, it's time to "know yourself."

Your most important consideration will be the current and anticipated sizes of your data set. Understanding the volume of data you'll need to accommodate will be the primary driver for your choice of individual physical nodes as well as your sharding plans. Once you've established an expected size of your data set, you need to consider the importance of your data and how tolerant you are of the possibility of lost or lagging data (especially in replicated scenarios). With this information in hand, you can plan and start testing your deployment strategy.

It sounds a little strange to hear that you should test a deployment strategy, but when it comes to big data, you want to make sure your databases start with a strong foundation. You should perform load testing scenarios on a potential deployment strategy to confirm that a given architecture will meet your needs, and there are a few specific areas that you should consider:

Memory Sizing
MongoDB (like many data-oriented applications) works best when the data set can reside in memory. Nothing performs better than a MongoDB instance that does not require disk I/O. Whenever possible, select a platform that has more available RAM than your working data set size. If your data set exceeds the available RAM for a single node, then consider using sharding to increase the amount of available RAM in a cluster to accommodate the larger data set. This will maximize the overall performance of your deployment. If you notice page faults when you put your database under production load, they may indicate that you are exceeding the available RAM in your deployment.
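
If you want to keep an eye on that, the page fault counter is exposed through MongoDB's serverStatus command. Here's a minimal sketch using Python and the pymongo driver; the host name is a placeholder:

```python
# A minimal sketch (Python + pymongo, placeholder host): sample the
# page fault counter that MongoDB exposes via serverStatus.
from pymongo import MongoClient

client = MongoClient("mongodb-node.example.com", 27017)
status = client.admin.command("serverStatus")

# extra_info.page_faults is reported on Linux; sample it before and
# after a load test -- a fast-growing delta under load suggests the
# working set no longer fits in RAM.
faults = status.get("extra_info", {}).get("page_faults")
mem = status["mem"]  # resident/virtual (and mapped, where reported) in MB
print("page faults:", faults)
print("resident MB:", mem.get("resident"))
```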

Disk Type
If speed is not your primary concern, or if you have a data set that is far larger than any in-memory strategy can support, selecting the proper disk type for your deployment is important. IOPS will be key in selecting your disk type, and the higher the IOPS, the better the performance of MongoDB. Local disks should be used whenever possible, as network storage can cause high latency and poor performance for your deployment. It's also advisable to use RAID 10 when creating disk arrays.

To give you an idea of what kind of IOPS to expect from a given type of drive, these are the approximate ranges of IOPS per drive in SoftLayer MongoDB engineered servers:

SATA II – 100-200 IOPS
15K SAS – 300-400 IOPS
SSD – 7,000-8,000 IOPS (read), 19,000-20,000 IOPS (write)

CPU
Clock speed and the number of available processors become considerations if you anticipate using MapReduce. It has also been noted that when running a MongoDB instance with the majority of the data in memory, clock speed can have a major impact on overall performance. If you are planning to use MapReduce or you're able to operate with a majority of your data in memory, consider a deployment strategy that includes a CPU with a high clock/bus speed to maximize your operations per second.

Replication
Replication provides high availability of your data if a node fails in your cluster. It should be standard to replicate with at least three nodes in any MongoDB deployment. The most common configuration for replication with three nodes is a 2x1 deployment: two nodes (the primary and a secondary) in a single data center, with a backup server in a secondary data center:

MongoDB Replication
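
As a concrete illustration of that 2x1 layout, a three-member replica set is initiated with the replSetInitiate command. Below is a hedged Python/pymongo sketch; the host names are placeholders:

```python
# A minimal sketch (Python + pymongo, placeholder host names): initiate
# a three-member replica set with two members in the primary facility
# and one in a secondary facility.
from pymongo import MongoClient

# Connect directly to one of the freshly started mongod instances
# (directConnection requires a reasonably recent pymongo).
client = MongoClient("dc1-node-a.example.com", 27017, directConnection=True)

config = {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "dc1-node-a.example.com:27017"},
        {"_id": 1, "host": "dc1-node-b.example.com:27017"},
        {"_id": 2, "host": "dc2-node-c.example.com:27017"},
    ],
}
client.admin.command("replSetInitiate", config)
```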

Sharding
If you anticipate a large, active data set, you should opt for a sharded MongoDB deployment. Sharding allows you to partition a single data set across multiple nodes. You can allow MongoDB to automatically distribute the data across nodes in the cluster, or you may elect to define a shard key and create range-based sharding for that key.

Sharding may also help write performance, so you can elect to shard even if your data set is small but requires a high rate of updates or inserts. It's important to note that when you deploy a sharded set, MongoDB requires three (and only three) config server instances, which are specialized Mongo runtimes that track the current shard configuration. Loss of one of these nodes will cause the cluster to go into a read-only mode (for the configuration only) and will require that all three be brought back online before any configuration changes can be made.
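
To make the shard key option concrete, here's a hedged Python/pymongo sketch of enabling range-based sharding through the standard admin commands. The database "appdata", collection "events" and key "customer_id" are hypothetical names, and the connection must go through a mongos router:

```python
# A minimal sketch (Python + pymongo); database, collection and key
# names are hypothetical. Connect to a mongos router, not to a shard.
from pymongo import MongoClient

client = MongoClient("mongos.example.com", 27017)

# Enable sharding for the database, then shard one collection on a
# range-based shard key; MongoDB splits and balances chunks by ranges
# of this key across the shards in the cluster.
client.admin.command("enableSharding", "appdata")
client.admin.command("shardCollection", "appdata.events",
                     key={"customer_id": 1})
```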

Write Safety Mode
There are several write safety modes that govern how MongoDB will handle the persistence of the data to disk. It is important to consider which mode best fits your needs for both data integrity and performance. The following write safety modes are available (a pymongo sketch mapping them to driver options follows the list):

None – This mode provides a deferred writing strategy that is non-blocking. This allows for high performance, although there is a small chance that data will be lost if a node fails. There is also the possibility that data written to one node in a cluster will not be immediately available on all nodes in that cluster for read consistency. The 'None' strategy also provides no protection in the case of network failures. That lack of protection makes this mode highly unreliable; it should be used only when performance is a priority and data integrity is not a concern.

Normal – This is the default for MongoDB if you do not select any other mode. It provides a deferred writing strategy that is non-blocking. This allows for high performance, although there is a small chance that data will be lost if a node fails. There is also the possibility that data written to one node in a cluster will not be immediately available on all nodes in that cluster for read consistency.

Safe – This mode will block until MongoDB has acknowledged that it has received the write request but will not block until the write is actually performed. This provides a better level of data integrity and will ensure that read consistency is achieved within a cluster.

Journal Safe – Journals provide a recovery option for MongoDB. Using this mode will ensure that the write has been acknowledged and a journal update has been performed before returning.

Fsync – This mode provides the highest level of data integrity and blocks until a physical write of the data has occurred. This comes with a degradation in performance and should be used only if data integrity is the primary concern for your application.
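
Here's the pymongo sketch mentioned above: a rough mapping of these modes onto driver write concern options. The collection and field names are hypothetical, and exact option names vary by driver and version:

```python
# A minimal sketch (Python + pymongo); "orders" is a hypothetical
# collection. Each WriteConcern roughly maps to one mode above.
from pymongo import MongoClient, WriteConcern

client = MongoClient("mongodb-node.example.com", 27017)
db = client.appdata

# None/Normal: fire-and-forget, no acknowledgement (w=0).
unacked = db.get_collection("orders", write_concern=WriteConcern(w=0))

# Safe: block until the server acknowledges the write (w=1).
safe = db.get_collection("orders", write_concern=WriteConcern(w=1))

# Journal Safe: acknowledged and committed to the journal (j=True).
journaled = db.get_collection("orders",
                              write_concern=WriteConcern(w=1, j=True))

# Fsync: block until the data is flushed to disk (fsync=True).
fsynced = db.get_collection("orders",
                            write_concern=WriteConcern(fsync=True))

safe.insert_one({"sku": "abc-123", "qty": 2})
```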

Testing the Deployment
Once you've determined your deployment strategy, test it with a data set similar to your production data. 10gen has several tools to help you with load testing your deployment, and the console has a tool named 'benchrun' which can execute operations from within a JavaScript test harness. These tools will return operation information as well as latency numbers for each of those operations. If you require more detailed information about the MongoDB instance, consider using the mongostat command or MongoDB Monitoring Service (MMS) to monitor your deployment during the testing.
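
If you'd like a quick scripted sanity check alongside those tools, a timed insert loop yields rough per-operation latencies. A minimal Python/pymongo sketch (host, collection and document shape are placeholders, and it's no substitute for benchrun's parallel workloads):

```python
# A minimal load-test sketch (Python + pymongo): time a batch of
# inserts and report rough latency percentiles. A quick sanity check
# only, not a replacement for benchrun or MMS.
import time
from pymongo import MongoClient

client = MongoClient("mongodb-node.example.com", 27017)
coll = client.loadtest.samples

latencies = []
for i in range(10000):
    start = time.time()
    coll.insert_one({"seq": i, "payload": "x" * 256})
    latencies.append(time.time() - start)

latencies.sort()
n = len(latencies)
print("median: %.2f ms" % (latencies[n // 2] * 1000))
print("p99:    %.2f ms" % (latencies[int(n * 0.99)] * 1000))
```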

Installation

When performing the installation of MongoDB, a few considerations can help create both a stable and performance-oriented solution. 10gen recommends the use of CentOS (64-bit) as the base operating system if at all possible. If you try installing MongoDB on a 32-bit operating system, you might run into file size limits that cause issues, and if you feel the urge to install it on Windows, you'll see performance issues if the OS begins using virtual memory to make up for a lack of RAM in your deployment. As a result, 32-bit operating systems and Windows should be avoided on MongoDB servers. SoftLayer provisions CentOS 6.X 64-bit operating systems by default on all of our MongoDB engineered server deployments.

When you've got CentOS 64-bit installed, you should also make the following changes to maximize your performance (all of which are included by default on all SoftLayer engineered servers):

Set SSD Read Ahead Defaults to 16 Blocks - SSD drives have excellent seek times, allowing the Read Ahead setting to be shrunk to 16 blocks. Spinning disks might require slight buffering, so those have been set to 32 blocks.

noatime - Adding the noatime mount option eliminates the need for the system to make writes to the file system for files which are simply being read; in other words: faster file access and less disk wear.

Turn NUMA Off in BIOS - Linux, NUMA and MongoDB tend not to work well together. If you are running MongoDB on NUMA hardware, we recommend turning NUMA off (running with an interleaved memory policy). If you don't, problems will manifest in strange ways, like massive slowdowns for periods of time or high system CPU time.

Set ulimit - We have set the ulimit to 64000 for open files and 32000 for user processes to prevent failures due to a loss of available file handles or user processes.

Use ext4 - We have selected ext4 over ext3. We found ext3 to be very slow in allocating files (or removing them). Additionally, access within large files is poor with ext3.

One last tip on installation: Keep the Journal and Data directories on distinct physical volumes. If they reside on a single physical volume, flushes to the Journal will interrupt data access and cause spikes of high latency within your MongoDB deployment.

Operations

Once a MongoDB deployment has been promoted to production, there are a few recommendations for monitoring and optimizing performance. You should always have the MMS agent running on all MongoDB instances to help monitor the health and performance of your deployment. This tool is also very useful if you have 10gen MongoDB Cloud Subscriptions because it provides useful debugging data for the 10gen team during support interactions. In addition to MMS, you can use the mongostat command (mentioned in the deployment section) to see runtime information about the performance of a MongoDB node. If either of these tools flags performance issues, sharding or indexing are first-line options to resolve them:

Indexes - Indexes should be created for a MongoDB deployment if monitoring tools indicate that field-based queries are performing poorly. Always use indexes when you are querying data based on distinct fields to help boost performance (see the sketch after this list).

Sharding - Sharding can be leveraged when the overall performance of the node is suffering because of a large operating data set. Be sure to shard before you get into the red: the system only splits chunks for sharding on insert or update, so if you wait too long to shard, you may see uneven distribution for a period of time (or forever, depending on your data set and shard key strategy). A pre-splitting sketch follows below.
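
Here's the index sketch referenced above, in Python with pymongo; the collection and field names are hypothetical:

```python
# A minimal sketch (Python + pymongo): index a frequently queried
# field so lookups on it no longer require a full collection scan.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb-node.example.com", 27017)
coll = client.appdata.events

coll.create_index([("customer_id", ASCENDING)])

# Queries filtering on customer_id can now use the index:
recent = coll.find({"customer_id": 42}).limit(10)
```

And the pre-splitting sketch: if you know your key distribution in advance, you can ask MongoDB to split chunks at chosen boundaries before the data arrives so that early writes land evenly. The split point below is purely illustrative:

```python
# A hedged sketch (Python + pymongo): pre-split an empty sharded
# collection at a known key boundary so early inserts spread across
# shards instead of piling into a single chunk. Run against a mongos.
from pymongo import MongoClient

client = MongoClient("mongos.example.com", 27017)
client.admin.command("split", "appdata.events",
                     middle={"customer_id": 500000})
```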

I know it seems like we've covered a lot over the course of this blog post, but this list of best practices is far from exhaustive. If you want to learn more, the MongoDB forums are a great resource to connect with the rest of the MongoDB community and learn from their experiences, and the documentation on MongoDB's site is another phenomenal resource. The best people to talk to when it comes to questions about MongoDB are the folks at 10gen, so I also highly recommend taking advantage of MongoDB Cloud Subscriptions to get their direct support for your one-off questions and issues.

-Harold

November 11, 2009

Viva Las Vegas!

I just got back in town from Las Vegas, Nevada. That town is filled with stories, and you can really love it or hate it, depending on the hour (or, if you are like me, whether you are arriving into McCarran or departing). I had a great trip this last go-around and actually made money at the tables. However, when they say that what happens in Vegas stays in Vegas, they are really talking about your money. Never forget that the house always wins. Always. Even if you win money, you'll wind up spending it on stuff out there and perpetuating your own good time. There isn't anything wrong with this at all. In fact, I plan on getting the short end of the stick on both the tables and on simply spending cash when I go out that way.

I think the really interesting thing that happens when you go through "the Vegas experience" is what it does to the perceived value of a dollar. You can take it for granted that all of a sudden you are transplanted into this fantasy world reminiscent of Pleasure Island from the story of Pinocchio, and you'll find that you have anything and everything you could want to do, eat, drink, or experience right at your fingertips. As this begins to progress, the value of a dollar plummets quickly. You start overpaying for things on a whim, tipping bigger, making bolder and even just dumber bets. I did this, and I can admit that I doubled down on my 11 when the dealer was showing a 10 in blackjack. It was blind luck that I hit it and won every single time. It's a bold and stupid bet to make, but when you are playing with house money the money doesn't matter, and it's almost as if you are trying to give it all back. My game of choice is craps because it gives you the best odds and there is a lot of action. It's good and bad, as it can all come and go in a hurry.

I have only been to Las Vegas a handful of times, but each time there is a point where even for a second you can feel invincible – that you can’t lose. Or, that even if you do lose you won’t even care. The flight home is a completely different story. I call it the hangover flight. You may be literally hung over, but no matter what, you will start to deal with all of the actions that happened on your trip and how you will need to handle them. As soon as you touch down in your own home town things slowly start to become “real” again. Your own home can even feel somewhat foreign for a while, but you’ll quickly come to the realization that you had become a completely different person for a short time.

I have come to the conclusion that there is always risk in everything that we do. Exposing yourself to the tables of Las Vegas may carry more financial risk than your morning commute to work, but in both cases there are still risks. There are also risks that we take in setting up and running a business. There are countless ways that you could be putting your business at risk without the right plan in place. From an IT perspective alone, you need to consider things like redundancy, failover, security, backups, growth, and even data loss. Knowing what is going to happen next for your business may be as likely as knowing what is going to come up on the next roll of the dice. If you know this for certain, you can press your luck and come up big, but if you are not prepared, you could lose everything you have on the table. It is better to be prepared.

I think of SoftLayer as the house, and remember, as I said before, the house always wins. The good thing about this is that you are betting with the house. Even so, you need to bet on yourself and back up your own bet. If the bulk of your business is in your data, then you need to have backups. If you absolutely need High Availability, then look into Clusters and Load Balancing. But remember that you are betting with the house, because SoftLayer gives you the capacity to do all of it at a very affordable price compared to trying to do it yourself, and without long-term commitments. Long-term commitments bring the most uncertainty in making moves that will positively affect your business. Imagine if a casino told you that you "had" to make 12 consecutive bets regardless of how well (or poorly) you were doing.

Coming home from Las Vegas to SoftLayer has been a very good thing and makes me thankful for where I am and what I have. There aren’t the levels of uncertainty here that are automatic with other datacenters or even other business models. SoftLayer is steady and it is very easy to get what you need here while cutting out the risk that you don’t want to deal with. SoftLayer is as much of a “sure thing” as any bet you can make!

May 1, 2009

What A Cluster

When you think about all the things that have to go right, all the time (where "all the time" is millions of times per second), for a user to get your content, it can be a little... daunting. The software, the network and the hardware all have to work for this bit of magic we call the Internet to actually occur.

There are points of failure all over the place. Take a server, for example: hard drives can fail, power supplies can fail, the OS can fail. The people running servers can fail... maybe you try something new and it has unforeseen consequences. This is simply the way of things.

Mitigation comes in many forms. If your content is mostly images, you could use something like a content delivery network to move your content into the "cloud" so that a failure in one area might not take out everything. On the server itself, you can do things like redundant power supplies and RAID arrays. Proper testing and staging of changes can help minimize the occurrence of software bugs and configuration errors impacting your production setup.

Even if nothing fails there will come a time when you have to shut down a service or reboot an entire server. Patches can't always update files that are in use, for example. One way to work around this problem is to have multiple servers working together in a server cluster. Clustering can be done in various ways, using Unix machines, Windows machines and even a combination of operating systems.

Since I've recently set up a Windows 2008 cluster, that is what we're going to discuss. First, we need to define some terms. A node is a member of a cluster. Nodes are used to host resources, which are things that a cluster provides. When a node in a cluster fails, another node takes over the job of offering that resource to the network. This is possible because resources (files, IPs, etc.) are stored on the network using shared storage, typically a set of SAN drives to which multiple machines can connect.

Windows clusters come in a couple of conceptual forms. Active/Passive clusters have the resources hosted on one node with another node sitting idle, waiting for the first to fail. Active/Active clusters, on the other hand, host some resources on each node, putting every node to work. The key with clusters is that you need to size the nodes such that your workloads can still function even if a node fails.

Ok, so you have multiple machines, a SAN between them, some IPs and something you wish to serve up in a highly available manner. How does this work? Once you create the cluster, you then go about defining resources. In the case of the cluster I set up, my resource was a file share. I wanted these files to be available on the network even if I had to reboot one of the servers. The resource was actually a combination of an IP address that could be answered by either machine and the iSCSI drive mount that contained the actual files.

Once the resource was established, it was hosted on NodeA. When I rebooted NodeA, though, the resource automatically failed over to NodeB, so the total interruption in service was only a couple of seconds. NodeB took possession of the IP address and the iSCSI mount automatically once it determined that NodeA had gone away.

File serving is a really basic example, but you can use clustering with much more complicated things like the Microsoft Exchange e-mail server, Internet Information Services, Virtual Machines and even network services like DHCP/DNS/WINS.

Clusters are not the end of service failures. The shared storage can fail, the network can fail, the software configuration or the humans could fail. With a proper technical staff implementing and maintaining them, however, clusters can be a useful tool in the quest for high availability.
