Posts Tagged 'Strategy'

December 6, 2012

MongoDB: Architectural Best Practices

With the launch of our MongoDB solutions, developers can provision powerful, optimized, horizontally scaling NoSQL database clusters in real time on bare metal infrastructure in SoftLayer data centers around the world. We worked tirelessly with our friends at 10gen — the creators of MongoDB — to build and tweak hardware and software configurations that enable peak MongoDB performance, and the resulting platform is pretty amazing. As Duke mentioned in his blog post, those efforts followed 10gen's MongoDB best practices, but what he didn't mention was that we created some architectural best practices of our own for MongoDB in deployments on our platform.

The MongoDB engineered servers that you order from SoftLayer already implement several of the recommendations you'll see below, and I'll note which have been incorporated as we go through them. Given the scope of the topic, it's probably easiest to break down this guide into a few sections to make it a little more digestible. Let's take a look at the architectural best practices of running MongoDB through the phases of the roll-out process: Selecting a deployment strategy to prepare for your MongoDB installation, the installation itself, and the operational considerations of running it in production.

Deployment Strategy

When planning your MongoDB deployment, you should follow Sun Tzu's (modified) advice: "If you know the [friend] and know yourself, you need not fear the result of a hundred battles." We substituted "friend" for "enemy" in this advice because the other party is MongoDB. If you aren't familiar with MongoDB, the top of your to-do list should be to read MongoDB's official documentation. That information will give you the background you'll need as you build and use your database. When you feel comfortable with what MongoDB is all about, it's time to "know yourself."

Your most important consideration will be the current and anticipated sizes of your data set. Understanding the volume of data you'll need to accommodate will be the primary driver for your choice of individual physical nodes as well as your sharding plans. Once you've established an expected size of your data set, you need to consider the importance of your data and how tolerant you are of the possibility of lost or lagging data (especially in replicated scenarios). With this information in hand, you can plan and start testing your deployment strategy.

It sounds a little strange to hear that you should test a deployment strategy, but when it comes to big data, you want to make sure your databases start with a strong foundation. You should perform load testing scenarios on a potential deployment strategy to confirm that a given architecture will meet your needs, and there are a few specific areas that you should consider:

Memory Sizing
MongoDB (like many data-oriented applications) works best when the data set can reside in memory. Nothing performs better than a MongoDB instance that does not require disk I/O. Whenever possible, select a platform that has more available RAM than your working data set size. If your data set exceeds the available RAM for a single node, then consider using sharding to increase the amount of available RAM in a cluster to accommodate the larger data set. This will maximize the overall performance of your deployment. If you notice page faults when you put your database under production load, they may indicate that you are exceeding the available RAM in your deployment.
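As a back-of-the-envelope illustration of that sizing logic, you can estimate how many shards it would take to keep a working set in RAM. This is a hypothetical sketch with made-up numbers, not SoftLayer sizing guidance:

```javascript
// Rough sketch: how many shards are needed to keep a working set in RAM?
// All figures below are hypothetical illustrations, not sizing guarantees.
function shardsNeeded(workingSetGB, indexGB, ramPerNodeGB, headroom) {
  // Reserve a fraction of each node's RAM for the OS, connections
  // and journal buffers rather than assuming all of it is usable.
  var usableGB = ramPerNodeGB * (1 - headroom);
  return Math.ceil((workingSetGB + indexGB) / usableGB);
}

// A 180 GB working set plus 20 GB of indexes on 64 GB nodes,
// keeping 20% headroom: ceil(200 / 51.2) = 4 shards.
console.log(shardsNeeded(180, 20, 64, 0.2));
```

If the result is 1, a single well-provisioned node may suffice; anything higher suggests planning for sharding from the start.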

Disk Type
If speed is not your primary concern, or if you have a data set that is far larger than any in-memory strategy can support, selecting the proper disk type for your deployment is important. IOPS will be key in selecting your disk type, and the higher the IOPS, the better MongoDB will perform. Local disks should be used whenever possible (as network storage can cause high latency and poor performance for your deployment). It's also advised that you use RAID 10 when creating disk arrays.

To give you an idea of what kind of IOPS to expect from a given type of drive, these are the approximate ranges of IOPS per drive in SoftLayer MongoDB engineered servers:

SATA II – 100-200 IOPS
15K SAS – 300-400 IOPS
SSD – 7,000-8,000 IOPS (read) 19,000-20,000 IOPS (write)

CPU
Clock speed and the number of available processors become a consideration if you anticipate using MapReduce. It has also been noted that when running a MongoDB instance with the majority of the data in memory, clock speed can have a major impact on overall performance. If you are planning to use MapReduce or you're able to operate with a majority of your data in memory, consider a deployment strategy that includes a CPU with a high clock/bus speed to maximize your operations per second.

Replication
Replication provides high availability for your data if a node fails in your cluster. It should be standard to replicate across at least three nodes in any MongoDB deployment. The most common three-node configuration is a 2x1 deployment — two replica set members in a primary data center with a third member in a secondary data center:
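In the mongo shell, a three-member replica set like the one described above might be initiated as follows. The host names are hypothetical placeholders, and the commands must be run against a live deployment:

```javascript
// Hypothetical hosts: two members in the primary data center (dc1),
// one in a secondary data center (dc2). Run from the mongo shell.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1.dc1.example.com:27017", priority: 2 },
    { _id: 1, host: "mongo2.dc1.example.com:27017", priority: 1 },
    { _id: 2, host: "mongo3.dc2.example.com:27017", priority: 0.5 }
  ]
});
rs.status();  // verify that all members come online and elect a primary
```

Giving the remote member a lower priority keeps the elected primary in the primary data center under normal conditions.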

MongoDB Replication

Sharding
If you anticipate a large, active data set, you should deploy a sharded MongoDB deployment. Sharding allows you to partition a single data set across multiple nodes. You can allow MongoDB to automatically distribute the data across nodes in the cluster or you may elect to define a shard key and create range-based sharding for that key.

Sharding may also help write performance, so you may elect to shard even if your data set is small but requires a high number of updates or inserts. It's important to note that when you deploy a sharded set, MongoDB will require three (and only three) config server instances, which are specialized mongod instances that track the current shard configuration. Loss of one of these nodes will cause the cluster to go into a read-only mode (for the configuration only) and will require that all three be brought back online before any configuration changes can be made.
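As a sketch, enabling sharding on a database and range-sharding a collection on a chosen key looks like this in the mongo shell. The database, collection and key names are hypothetical, and the commands must be run against a mongos router in a live cluster:

```javascript
// Run against a mongos router; "mydb", "events" and "userId" are
// hypothetical names chosen for illustration.
sh.enableSharding("mydb");

// Range-based sharding on a user-chosen shard key:
sh.shardCollection("mydb.events", { userId: 1 });

sh.status();  // review how chunks are distributed across shards
```

Choosing a shard key with good cardinality and even write distribution matters far more than the mechanics above; a poor key can concentrate all inserts on a single shard.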

Write Safety Mode
There are several write safety modes that govern how MongoDB will handle the persistence of the data to disk. It is important to consider which mode best fits your needs for both data integrity and performance. The following write safety modes are available:

None – This mode provides a deferred writing strategy that is non-blocking. This will allow for high performance, however there is a small opportunity in the case of a node failing that data can be lost. There is also the possibility that data written to one node in a cluster will not be immediately available on all nodes in that cluster for read consistency. The 'None' strategy will also not provide any sort of protection in the case of network failures. That lack of protection makes this mode highly unreliable and should only be used when performance is a priority and data integrity is not a concern.

Normal – This is the default for MongoDB if you do not select any other mode. Like 'None', it provides a deferred, non-blocking writing strategy, which allows for high performance but leaves a small window in which data can be lost if a node fails, and data written to one node in a cluster may not be immediately available on all nodes for read consistency. Unlike 'None', however, network errors are reported back to the client.

Safe – This mode will block until MongoDB has acknowledged that it has received the write request but will not block until the write is actually performed. This provides a better level of data integrity and will ensure that read consistency is achieved within a cluster.

Journal Safe – Journals provide a recovery option for MongoDB. Using this mode will ensure that the data has been acknowledged and a Journal update has been performed before returning.

Fsync – This mode provides the highest level of data integrity and blocks until a physical write of the data has occurred. This comes with a degradation in performance and should be used only if data integrity is the primary concern for your application.
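In driver terms, these modes map to getLastError options. The following is a rough sketch in the mongo shell as of the MongoDB 2.x era (the collection name is hypothetical, and the commands require a live mongod):

```javascript
db.orders.insert({ item: "widget", qty: 1 });

// "Safe": block until the server acknowledges receipt of the write.
db.runCommand({ getlasterror: 1, w: 1 });

// "Journal Safe": additionally wait for the journal commit.
db.runCommand({ getlasterror: 1, w: 1, j: true });

// "Fsync": wait for a physical flush to disk (slowest, most durable).
db.runCommand({ getlasterror: 1, fsync: true });
```

Most drivers expose the same options as write-concern parameters, so the trade-off between latency and durability can be tuned per operation.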

Testing the Deployment
Once you've determined your deployment strategy, test it with a data set similar to your production data. 10gen has several tools to help you with load testing your deployment, and the console has a tool named 'benchrun' which can execute operations from within a JavaScript test harness. These tools will return operation information as well as latency numbers for each of those operations. If you require more detailed information about the MongoDB instance, consider using the mongostat command or MongoDB Monitoring Service (MMS) to monitor your deployment during the testing.

Installation

When performing the installation of MongoDB, a few considerations can help create both a stable and performance-oriented solution. 10gen recommends the use of CentOS (64-bit) as the base operating system if at all possible. If you try installing MongoDB on a 32-bit operating system, you might run into file size limits that cause issues, and if you feel the urge to install it on Windows, you'll see performance issues if virtual memory begins to be utilized by the OS to make up for a lack of RAM in your deployment. As a result, 32-bit operating systems and Windows operating systems should be avoided on MongoDB servers. SoftLayer provisions CentOS 6.X 64-bit operating systems by default on all of our MongoDB engineered server deployments.

When you've got CentOS 64-bit installed, you should also make the following changes to maximize your performance (all of which are included by default on all SoftLayer engineered servers):

Set SSD Read Ahead Defaults to 16 Blocks - SSD drives have excellent seek times, allowing the read-ahead to be shrunk to 16 blocks. Spinning disks benefit from slightly more buffering, so those are set to 32 blocks.

noatime - Adding the noatime option eliminates the need for the system to make writes to the file system for files which are simply being read — or in other words: Faster file access and less disk wear.

Turn NUMA Off in BIOS - Linux, NUMA and MongoDB tend not to work well together. If you are running MongoDB on NUMA hardware, we recommend turning it off (running with an interleave memory policy). If you don't, problems will manifest in strange ways, like massive slowdowns for periods of time or high system CPU time.

Set ulimit - We have set the ulimit to 64000 for open files and 32000 for user processes to prevent failures due to a loss of available file handles or user processes.

Use ext4 - We have selected ext4 over ext3. We found ext3 to be very slow in allocating files (or removing them). Additionally, access within large files is poor with ext3.

One last tip on installation: Make the journal and data volumes distinct physical volumes. If the journal and data directories reside on a single physical volume, flushes to the journal will interrupt data access and cause spikes of high latency within your MongoDB deployment.
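On a CentOS 6.x system, the tweaks above might be applied roughly as follows. The device names, mount points and paths are hypothetical, and SoftLayer engineered servers ship with equivalent settings preconfigured:

```shell
# Read-ahead: 16 blocks for SSDs, 32 for spinning disks
# (hypothetical device name; persist via an init script or udev rule)
blockdev --setra 16 /dev/sda

# Mount data volumes with noatime (add the option to /etc/fstab to persist)
mount -o remount,noatime /var/lib/mongo

# If NUMA can't be disabled in BIOS, start mongod with interleaved memory
numactl --interleave=all mongod --config /etc/mongod.conf

# Raise limits for the mongod user (set permanently in /etc/security/limits.conf)
ulimit -n 64000   # open files
ulimit -u 32000   # user processes
```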

Operations

Once a MongoDB deployment has been promoted to production, there are a few recommendations for monitoring and optimizing performance. You should always have the MMS agent running on all MongoDB instances to help monitor the health and performance of your deployment. This tool is also very useful if you have 10gen MongoDB Cloud Subscriptions because it provides useful debugging data for the 10gen team during support interactions. In addition to MMS, you can use the mongostat command (mentioned in the deployment section) to see runtime information about the performance of a MongoDB node. If either of these tools flags performance issues, sharding or indexing are first-line options to resolve them:

Indexes - Indexes should be created for a MongoDB deployment if monitoring tools indicate that field-based queries are performing poorly. Always use indexes when you are querying data based on distinct fields to help boost performance.
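For example, if queries filter on a specific field, an index can be created from the mongo shell. This sketch uses the 2.x-era syntax and hypothetical collection and field names, and must be run against a live deployment:

```javascript
// Hypothetical collection and fields, for illustration only.
db.users.ensureIndex({ email: 1 });

// Compound index for queries that filter on status and sort by created date:
db.users.ensureIndex({ status: 1, created: -1 });

// Confirm that a query actually uses the index:
db.users.find({ email: "a@example.com" }).explain();
```

The explain() output shows which index (if any) the query planner selected and how many documents were scanned, which is the quickest way to verify an index is earning its keep.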

Sharding - Sharding can be leveraged when the overall performance of a node suffers because of a large operating data set. Be sure to shard before you get into the red: the system only splits chunks on insert or update, so if you wait too long to shard, you may see uneven distribution for a period of time (or indefinitely, depending on your data set and shard key strategy).

I know it seems like we've covered a lot over the course of this blog post, but this list of best practices is far from exhaustive. If you want to learn more, the MongoDB forums are a great resource to connect with the rest of the MongoDB community and learn from their experiences, and the documentation on MongoDB's site is another phenomenal resource. The best people to talk to when it comes to questions about MongoDB are the folks at 10gen, so I also highly recommend taking advantage of MongoDB Cloud Subscriptions to get their direct support for your one-off questions and issues.

-Harold

July 4, 2012

Cedexis: Tech Partner Spotlight

This guest blog features Cedexis, a featured member of the SoftLayer Technology Partners Marketplace. Cedexis is a content and application delivery system that offers strategies and solutions for multi-platform content and application delivery to companies focused on maximizing web performance. In this video we talk to Cedexis Co-Founder Julien Coulon.

Company Website: www.cedexis.com
Tech Partners Marketplace: http://www.softlayer.com/marketplace/cedexis

A Multi-Cloud Strategy - The Key to Expansion and Conversion

Web and mobile applications have collapsed geographic barriers to business, bringing brand and commerce experiences ever-closer to increasingly far-flung customers. While web-based business models are powerful enablers for global expansion, they also create a new challenge in managing availability and performance across diverse and distributed markets: How do you ensure consistent web performance across all markets without investing in physical infrastructure in all of those markets?

Once a business gets its core business on a consistent and reliable provider like SoftLayer, we typically recommend that they consider a multi-cloud strategy that will spread availability and performance risk across a global infrastructure of public and private data centers, delivery networks and cloud providers. Regardless of how fantastic your core SoftLayer hosting is, the reality is that single-source dependency introduces significant business risk. Fortunately, much of that business risk can be mitigated by adding a layer of multi-cloud architecture to support the application.

Recent high-profile outages speak to the problem that multi-sourcing solves, but many web-based operations remain precariously dependent on individual hosting, CDN and cloud providers. It's a lot like having server backups: If you never need a backup that you have, that backup probably isn't worth much to you, but if you need a backup that you don't have, you'd probably pay anything to have it.

A multi-cloud strategy drives revenue and other conversions. Why? Because revenue and conversions online correlate closely with a site's availability and performance. High Scalability posted several big-name real-world examples in the article, "Latency is Everywhere and it Costs You Sales." When an alternative vendor is just one click away, performance often makes a difference measured in dollars.

How Cedexis Can Help

Cedexis was founded to help businesses see and take advantage of a multi-cloud strategy when that strategy can provide better uptime, faster page loads, reliable transactions, and the ability to optimize cost across a diverse network of platforms and providers. We built the Cedexis Radar to measure the comparative performance of major cloud and delivery network providers (demo), and with that data, we created Openmix to provide adaptive automation for cloud infrastructure based on local user demand.

In order to do that effectively, Cedexis was built to be provider-agnostic, community-driven, actionable and adaptive. We support over 100 public cloud providers. We collect performance data based on crowd-sourced user requests (which represent over 900 million measurements per day from 32,000 individual networks). We allow organizations to write custom scripts that automate traffic routing based on fine-grained policies and thresholds. And we go beyond rules-driven traffic routing, dynamically matching actual user requests with the most optimal cloud at a specific moment in time.

Getting Started with Cedexis

  1. Join the Community
    Get real-time visibility into your users' performance.
  2. Compare the Performance of Your Clouds and Delivery Networks
    Make informed decisions to optimize your site performance with Radar
  3. Leverage Openmix to optimize global web performance
    Optimize web and mobile performance to serve global markets

The more you can learn about your site, the more you can make it better. We want to help our customers drive revenue, enter new markets, avoid outages and reduce costs. As a SoftLayer customer, you've already found a fantastic hosting provider, and if Openmix won't provide a provable significant change, we won't sell you something you don't need. Our simple goal is to make your life better, whether you're a geek or a suit.

-Julien Coulon, Cedexis

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
April 9, 2012

Scaling SoftLayer

SoftLayer is in the business of helping businesses scale. You need 1,000 cloud computing instances? We'll make sure our system can get them online in 10 minutes. You need to spin up some beefy dedicated servers loaded with dual 8-core Intel Xeon E5-2670 processors and high-capacity SSDs for a new application's I/O-intensive database? We'll get it online anywhere in the world in under four hours. Everywhere you look, you'll see examples of how we help our customers scale, but what you don't hear much about is how our operations team scales our infrastructure to ensure we can accommodate all of our customers' growth.

When we launch a new data center, there's usually a lot of fanfare. When AMS01 and SNG01 came online, we talked about the thousands of servers that are online and ready. We meet huge demand for servers on a daily basis, and that presents us with a challenge: What happens when the inventory of available servers starts dwindling?

Truck Day.

Truck Day isn't limited to a single day of the year (or even a single day in a given month) ... It's what we call any date our operations team sets for delivery and installation of new hardware. We communicate to all of our teams about the next Truck Day in each location so SLayers from every department can join the operations team in unboxing and preparing servers/racks for installation. The operations team gets more hands to speed up the unloading process, and every employee has an opportunity to get first-hand experience in how our data centers operate.

If you want a refresher course about what happens on a Truck Day, you can reference Sam Fleitman's "Truck Day Operations" blog, and if you want a peek into what it looks like, you can watch Truck Day at SR02.DAL05. I don't mean to make this post all about Truck Day, but Truck Day is instrumental in demonstrating the way SoftLayer scales our own infrastructure.

Let's say we install 1,000 servers to officially launch a new pod. Because each pod has slots for 5,000 servers, we have space/capacity for 3,000-4,000 more servers in the server room, so as soon as more server hardware becomes available, we'll order it and start preparing for our next Truck Day to supplement the pod's inventory. You'd be surprised how quickly 1,000 servers can be ordered, and because it's not very easy to overnight a pallet of servers, we have to take into account lead time and shipping speeds ... To accommodate our customers' growth, we have to stay one step ahead in our own growth.

This morning in a meeting, I saw a pretty phenomenal bullet that got me thinking about this topic:

Truck Day — 4/3 (All Sites): 2,673 Servers

In nine different data center facilities around the world, more than 2,500 servers were delivered, unboxed, racked and brought online. Last week. In one day.

Now I know the operations team wasn't looking for any kind of recognition ... They were just reporting that everything went as planned. Given the fact that an accomplishment like that is "just another day at SoftLayer" for those guys, they definitely deserve recognition for the amazing work they do. We host some of the most popular platforms, games and applications on the Internet, and the DC-Ops team plays a huge role in scaling SoftLayer so our customers can scale themselves.

-@gkdog

February 24, 2012

Kontagent: Tech Partner Spotlight

This is a guest blog featuring Kontagent, one of this month's additions to the SoftLayer Technology Partners Marketplace. Kontagent's kSuite Analytics Platform is a leading enterprise analytics solution for social and mobile application developers. Its powerful dashboard and data science expertise provide organization-wide insights into how customers interact within applications and how to act on that data. Below the video, you'll see an excerpt from a very interesting interview they facilitated with Gaia Online's CEO with fantastic insight into mobile app metrics.

Important Mobile App Metrics to Track

At Kontagent, we've helped hundreds of social customers win by helping them gain better insights into their users' behaviors. We're always improving our already-powerful, best-in-class analytics platform, and we've been leveraging our knowledge and experience to help many of our social customers make a successful transition into the mobile space, too.

Whether you're in the early stages of developing a mobile application, or you've already launched it and have a substantial user base, looking to social app developers for a history lesson on how to do it right can give you a huge head-start and a greater chance at success.

Gaia Online has "done it right" with Monster Galaxy — a hit on both Facebook and iOS. In the first installment of our Kontagent Konnect Executive Interview Series, we spoke with CEO Mike Sego on how the company is applying many of the lessons it learned in moving social-to-mobile, including:

  • The metrics that are most important to succeeding on mobile
  • How to monetize on the F2P model
  • How to successfully split-test on iOS (yes, it is possible!)
  • Other tactics used to keep players engaged and coming back for more

Q: What are the overarching fundamentals for developers who want to make the social to mobile transition? Do these fundamentals also apply to mobile developers in general?
A: Applying the knowledge you gained on Facebook to developing for mobile is the most effective way we've found to succeed in the mobile space.

When it comes to content, the mechanics are almost identical for what motivates user engagement, retention, and monetization between mobile and social. Appointment mechanics, energy mechanics, leaving players wanting more, designing specific goals that are just out of reach until multiple play sessions, etc.—the user experience is consistent.

When it comes to social and mobile game apps, we have found that free-to-play models are the most successful at attracting users. Beyond that, you should focus on a very tight conversion funnel; once a new user has installed your application, analyze every action she takes through the levels or stages of your app. When you start looking at cohorts of users, if there is a spike in drop-offs, you should start asking yourself, 'What is it about this particular stage that could be turning off users? Did I make the level too difficult? Was it not difficult enough? What are some other incentives I can bake into this particular point of the app to get them to keep going?'

But, as you continue to develop your application, keep in mind that you should develop and release quickly, and test often. The trick is to test, fine-tune and iterate with user data. These insights will help you to improve conversion. Spending a disproportionate amount of time instrumenting and scrutinizing the new user experience will pay dividends down the line. This is true for both social and mobile games.

Q: What are the metrics you pay most attention to?
A: Just as it was in social, the two biggest levers in mobile are still minimizing customer acquisition costs (CAC) and maximizing lifetime value (LTV). The question boils down to this: How can we acquire as many users as possible, for as little money as possible? And, how can we generate as much revenue as possible from those users? Everything else is an input into those two major metrics because those two metrics are what will ultimately determine if you have a scalable hit or a game that just won't pay for itself.

User retention over a longer period of time
Specifically, look at how many users stick around, and how long they stick around, i.e., Day 1, Day 7 retention. (Day 1 retention alone is too broad for you to fully understand what needs to be improved. That's the reason for testing the new user experience.)

Cost to acquire customers
We look at the organic ratio—the number of users who come to us without us having paid for them. This is different from the way we track virality in social since our data for user source isn't as detailed… continued

The full interview goes on a bit longer, and it has profound responses to topics we alluded to earlier in the post. We don't want to over-stay our generous welcome here on the SoftLayer blog, so if social and mobile application development are of interest to you, register here (for free) to learn more from the complete interview.

-Catherine Mylinh, Kontagent

December 23, 2011

Back up Your Life: In the Clouds, On the Go

The value of our cloud options here at SoftLayer has never been more noticeable than during the holiday season. Such a hectic time of the year can cause a lot of stress ... Stress that can lead to human error on some of your most important projects, data and memories. Such a loss could wipe out weeks or even years of valuable work and memories.

In the past few months, I've gone through two major data-related incidents that I was prepared for, and I can't imagine what I would have done if I didn't have some kind of backups in place. In one instance, my backups were not very current, so I ended up losing two weeks' worth of work and data, but every now and then, you hear horror stories of people losing (or having to pay a lot to restore) all of their data. The saddest part about data loss is that it's so easily preventable these days with prevalent backup storage platforms. For example, SoftLayer's CloudLayer Storage is a reliable, inexpensive place to keep all of your valuable data so you're not up a creek if you corrupt or lose your local versions somehow (like dropping a camera, issuing an incorrect syntax command or simply putting a thumb drive through the washer).

That last "theoretical" example was in fact one of the "incidents" I dealt with recently. A very important USB thumb drive that I keep with me at all times was lost to the evil water machine! Because the security of the data was very important to me, I made sure to keep the drive encrypted in case of loss or theft, but the frequency of my backup schedule was the crack in my otherwise well-thought-out data security and redundancy plan. A thumb drive is probably one of the best examples of an item that needs an automatic system or ritual to ensure data concurrency. It's a device we carry with us at all times, so it sees many changes in data. If that data is not properly updated in a central (secure and redundant) location, then all of our other efforts to take care of it are wasted.

The problem with my "Angel" (the name of the now-washed USB drive) was related to concurrency rather than security, and looking back at my mistake, I see how "the cloud" would have served as a platform to better protect my data with both of those points in mind. And that's why my new backups-in-the-cloud practices let me sleep a little more soundly these days.

If you're venturing out to fight the crowds of last-minute holiday shoppers or if you're just enjoying the sights and sounds of the season, be sure your memories and keepsake digital property are part of a well designed SRCD (secure, redundant and concurrent data) structure. Here are a few best practices to keep in mind when setting up your system:

  • Create a frequent back-up schedule
  • Use at least two physically separate devices
  • Follow your back-up schedule strictly
  • Automate everything you can for when you forget to execute on the previous bullet*

*I've used a few different programs (both proprietary and non-proprietary) that allow an automatic back-up to be performed when you plug your "on the go" device into your computer.

I'll keep an eye out for iPhone, Android and Blackberry apps that will allow for automatic transfers to a central location, and I'll put together a fresh blog with some ideas when I find anything interesting and worth your attention.

Have a happy holiday season!

- Jonathan

October 10, 2011

A Manifesto: Cloud, Dedicated and Hosting Computing

We are witnessing a fundamental shift in the IT industry. It is forever changing the way technology is delivered and consumed. The pay-as-you-go model for everything you need in IT is shattering the old computing paradigms, from software licensing models and hardware refresh cycles to budgeting operating costs. This change is bringing about more control and transparency to users while accelerating the commoditization of IT by making it easily available through a new model.

This new model comes in three major "flavors": Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) solutions. We incorporate and enable all three by offering a unified, fully automated platform to enable greater customer control over their IT environments. The key tenets of this emerging model for SoftLayer are innovation, empowerment, automation and integration. Here's how we deliver against these four key tenets.

Innovation: We want to lead the industry by offering best-of-breed and proprietary cloud, dedicated, and managed hosting solutions based on our own intellectual property. Currently, we have more than 252,000 hours invested and 2.6 million lines of code developed around these solutions. Customers can take charge of every aspect of their IT operations (servers, storage, networking & services) through our fully automated platform. Our Customer Portal and fully featured APIs give customers more control by providing direct access to more than 100 back-end systems and activities — every aspect of IT operations can be managed.

Empowerment: We turn IT operations into a predictable fixed cost. Customers can stay focused on achieving their business goals, not managing IT infrastructure. We offer expert planning and support from a certified, 24/7 support staff. Customers can deploy and scale when they want with one-day and on-demand automated provisioning. They can keep it as long (or short) as needed, with monthly contracts. In addition, customers can choose what they want to manage and what they don't, with the ability to run hybrid environments that are part self-managed and part managed. This speaks to the flexibility of our platform!

Automation: This is an area that makes SoftLayer stand out from the pack. We automate the deployment and management of all services, accelerating provisioning time, streamlining administrative tasks, and making it all on-demand, day and night. With automation that mitigates the risk of human error, comprehensive security practices and options, and a 24/7 team of certified engineers, we provide greater stability, a 100% Uptime Guarantee, and around-the-clock support for any issue or service.

Integration: This is the final ingredient to making it ALL work. We seamlessly integrate hardware, software, and networking into a unified service, all conveniently controlled through our easy-to-use Customer Portal and robust APIs. We provide full information, full-time, through our Customer Portal and APIs for every service we offer; there is no data about a system that we keep from our customers, from usage statistics to network performance and beyond. We have complete transparency.

These four key tenets are what set us apart. When SoftLayer started back in 2005, the team's goal was not to be Go Daddy on steroids. We set our sights on being the de facto platform for mainstream businesses to run all their IT operations. This means the complete gamut of applications and workloads with no compromise of performance, security, reliability and access. We are entering into a new IT era, where "connected everything" is the norm. It reminds me of the old phrase "the network is the computer" from Sun Microsystems' slogan. We have the foundation in place, which will make for an unforgettable journey. Let us know what you think.

-@gkdog

October 6, 2011

Raising Funds and Awareness - American Heart

SoftLayer is having a contest among all departments to see who can raise the most money for the American Heart Association. Each department (some departments were combined depending on the number of employees in the group) was asked to think of a fundraiser, event or just some way the team could raise money for a great cause. Whoever raises the most money wins the grand prize of bragging rights around the office.

The Teams

  • Accounting/Finance
  • Marketing/Strategy
  • Administration/HR/Legal
  • Networking
  • CSA/Managed Services
  • Sales
  • CST
  • SBT/Infrastructure/Implementation
  • Executives (Officers and SVPs)
  • Systems – Windows/Linux
  • Facilities
  • Technology
  • Inventory

Most departments have done very well, but given my affiliation with the Marketing team, I want to talk about how amazingly we performed. The Marketing and Strategy team kicked off our fundraising efforts with a BBQ event that featured ribs, brisket and potato salad; an auction with some great prizes like Rangers tickets, Callaway Golf polo shirts and FC Dallas tickets; and T-shirts for sale, sponsored by SuperMicro, that read, "DEDICATED and we don't just mean our servers":

AHA Fundraiser

And here are a few snapshots from the BBQ Event:

AHA Fundraiser

It's pretty clear that 3 Bars BBQ is a big draw in the SoftLayer office.

Needless to say, this event was a great success! The Marketing team didn't stop there, though. We had FOUR more auctions ... and we pulled out the big guns (two 600GB SSDs and two 16GB iPad 2s). In my biased opinion, the Marketing team worked the hardest for our donations, with sweat and tears ... mainly sweat; you know how hot it is outside in the middle of June in Texas.

To date, our team has raised a little over $7,500 in donations for the American Heart Association. You may say, "Wow, that's a lot of cash!" One of the coolest reasons we were able to raise so much money was that we didn't need to take cash: we got a mobile credit card device, so the "I don't have cash on me" excuse was rendered useless! Yeah, I know ... we are the smartest team ALIVE! After a few events, every other department asked to use our device for their fundraising efforts.

I am so proud of all the work the Marketing and Strategy teams have put into this fundraiser, and I'm especially proud to be a part of an organization that goes to such lengths to help out a charity.

Go Team SoftLayer!

-Natalie :-)

March 16, 2011

Everything Counts - Social Media Measurement

Here I sit on another flight back to Dallas, and I just finished my movie. What's the best way to spend the rest of the "air time?" Voila! Another blog! Your heart is likely aflutter as you wonder what on earth I've come up with to post this time.

After rummaging through the topics bouncing around in my head, I figure it's time for another Social Media blog. I've been tasked with defining the ROI for our social media strategy. Sounds easy, right? You'd be surprised.

Sure, our social media work is well planned out. Our team includes one full-time ninja and a few other utility players who span other departments. Our strategy includes all kinds of tactics that we use to let the world (or our corner of it) know about speaking engagements, conferences, new product releases, updated product releases, changes to our website and portal, maintenance windows, outages, etc. (I'd get into more specifics about the tactics, but they are so classified that even I don't know many of them.)

So with something so defined and so well thought out, it must be really simple to see if we are #Winning, right? Well, not really. Just the other day at IDC Directions 2011 in Boston, @erintraudt used a great quote attributed to Einstein to explain exactly how difficult it can be to quantify your results: "Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted." Every good marketing boss would love to be able to say, "We tweet this, we Facebook that, and we get this and that out of it every time," but as you know, it just doesn't work that way.

I will say that after listening to the panels and hearing how the big companies are attacking social media, I think we are years ahead of them in the game. The big ideas they are coming up with are things we tried two years ago, and we already know the pros and cons of those approaches.

I might not be able to hand you a spreadsheet showing exactly how many sales a given social campaign will generate or what effect it will have on our brand, but we're starting to use a lot of pretty cool tools (some from our customers) to start figuring it all out. Maybe the ninja should be put on the case too.

What do you use to measure social media impact of your campaigns? Do you have a product or service we can check out?

What I can tell you is this: Our first concerted Twitter campaign went much better than expected, and while I'm not at liberty to share many details, we think reaching a lot of relevant people who engaged with our content is a distinct measure of success. Even better: We paid less than $2.00 to do so!

I'll take those kinds of results any day of the week and twice on Sunday.

-Skinman

January 27, 2011

What Does it Cost (Part 3)

Determining the value of "On-Demand"
It's 2011, and as we bookend the tail end of 2010 and the beginning of the new year in an effort to close strong and get a good jump on things, it can be easy to lose sight of the big picture. As it turns out, that's exactly what distracted me, so it's been a while since I cranked out the previous installment of this "What Does it Cost" series.

As a quick refresher, the idea behind this series came from listening to keynote speakers in conferences this past year who harped on the necessity of getting more value for the same (or even less) budget. In What Does it Cost (Part 1), we discussed how opportunity costs are the most overlooked and important part of planning an infrastructure. In What Does it Cost (Part 2), our focus was on how your people relate to your infrastructure.

The goal of this series is to fairly assign a value to what a company like SoftLayer provides relative to the costs of doing it in-house or by using colocation.

Let's start by making sure we know what 'On-Demand' actually means: You get what you want, when you want it, for as long as you need it. No more and no less. It pretty much takes out all opportunity costs. On-Demand is good, it is necessary, and in the future it will be the difference between successful businesses and ones that are destined to fail.

Everyone has heard that "time is money." Receiving server and CCI infrastructure without delay, right when you want it, should be valuable, right? But just how valuable is that delivery? Have you ever wished you could go back in time and change something about your past? I think we all have. If I went back and took a "risk" in the stock market, knowing what I know now, I'd be writing this blog on a platinum-plated computer. But betting on a game you've already watched isn't really risk.

Infrastructure investments are risky. In some cases, their reward will justify their risk, while in other cases, taking a risk-averse On-Demand approach provides the best outcome all around.

It's tough to assign a once-and-for-all value to On-Demand options because the value is different not only for every business but also for every specific scenario. An idea is abstract and meaningless unless we can apply it in some kind of practical application, so perhaps the best way to illustrate the decision to go the On-Demand route is to look at it through the lens of two scenarios:

  1. You project your company will grow by a factor of 10 over the next few years.
  2. Your company is doing a hardware refresh to take advantage of a recent advancement.

Growth
If you have to plan these things out ahead of time, it means making commitments and spending upfront capital and time to grow your infrastructure to meet your projections. There is a lot of uncertainty and risk. What happens if the market takes a sudden downturn and your infrastructure needs to adapt just as quickly? Everything works out fine as long as the future goes according to plan, but what if your projections were wrong?

Even if you grow 50% (which could still be a huge feat), the fact that you planned for 1000% growth could leave you financially crippled. What if the opposite happens and your projections underestimate your future needs? That would seem less risky, but in reality, not having the tools necessary to provide your services or support your clients can be even more devastating. Never underestimate the cost of not being able to deliver and keep up with demand.
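To put rough numbers on that risk, here is a minimal back-of-the-envelope sketch. The figures are illustrative only, taken from the 10x-projection scenario above:

```python
# Illustrative sketch: infrastructure provisioned for the 10x projection,
# while the business "only" grew 50%.
planned_capacity = 10.0   # capacity bought for 10x the current load
actual_need = 1.5         # actual load after 50% growth

utilization = actual_need / planned_capacity
idle_fraction = 1 - utilization

print(f"Utilization of the upfront investment: {utilization:.0%}")
print(f"Capacity sitting idle:                 {idle_fraction:.0%}")
```

In this illustration, 85% of the capital spent sits idle, which is exactly the kind of financial crippling described above.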

Hardware Refresh
For a hardware refresh, chances are that there is new software available and new ways of doing things to maximize the potential of recent advances in hardware. Generally, I see companies lean away from upgrading to the "new" and simply keep the status quo. Why? The risk is too high, and the time invested by personnel is far too costly. It may not be appealing to take a risk on figuring out what will make a new solution work when the cost of that investigation is high and the results are uncertain at best.

The problem is that if you stay unaware of changes in technology you'll soon find yourself getting further and further behind. The thought of maintaining the status quo can be as dangerous as quicksand.

Comparing In-House v. Outsourced
OK, now that the scenarios are set, let's look at what happens when we play out an in-house infrastructure vs. an outsourced On-Demand solution with SoftLayer.

In a data center environment, the decisions you make have a long term impact. Once you make your decision and spend your money to host in-house, you're fully invested in that decision. The only thing that could break from your plan is a catastrophe. The sensible thing to do is to take time to better educate yourself so you can make better decisions. That means businesses will regularly take months (I've even heard of instances taking more than a year) to devise a strategy and months more to execute the implementation of that strategy. As a result, companies try to plan in multi-year cycles (5 years seems fairly typical).

Think about this: What has happened to your business and in your own personal life in the past 3 months? Do you feel confident in predicting exactly what will happen in the next 5 years?

While SoftLayer might not be in a better position to predict what will happen in the next 5 years, we operate an 'adjustable' infrastructure, so we'll be ready for whatever may come. Instead of an upfront capital expenditure, you can pay monthly to get the newest innovations in server hardware, and you can upgrade at any time without penalty. Instead of signing the long term contracts inherent in running your own data center environment (space, power, bandwidth, software, etc.), you can have hour-by-hour terms, month-to-month at the longest.

Running your own facilities means waiting weeks/months for your hardware to arrive so that you can have it racked and put into production. SoftLayer can build you customized dedicated server configurations that can be provisioned in under four hours. Cloud Compute Instances (CCIs) can be added in minutes, and by using templates you can save even more precious work time. You can even go as far as to automate this by utilizing our API-driven customer portal.

Even if the future doesn't go according to plan when you're using an outsourced On-Demand provider, you will have succeeded in eliminating much of your long-term risk. You can make the necessary adjustments to keep your business in the best position, regardless of what happens.

To give this example some teeth I'll tell you about a customer that I recently assisted: Customer X was looking at adding four fairly stacked servers and a SAN Solution to their infrastructure. To manage that infrastructure in-house, they determined that they would need to add two employees. All in all, this was going to cost them $140K in upfront capital for the hardware, and $120K per year on personnel (if they were lucky). This was all before they could see if their current in-house data center environment could support the additional infrastructure.

As it turns out, the data center environment couldn't sustain the power, and they would be forced to re-up on three year-term contracts for more space, power, and bandwidth to move their existing infrastructure to a larger portion of their data center. This project's costs were getting out of control, but they needed to make a change to deal with business growth. The problem with executing this plan is that at the end of the day, the business growth might not be able to justify the cost of expansion.

The worst part of this is that they were "pot committed" (for any readers that play poker) because of the big-money deal on software licensing they had already executed.

SoftLayer helped them by offering different 'on-demand' ways to get the job done. As it turned out, they were ordering enough hardware to plan 18 months out, and they expected further growth on a longer timeline. They were not planning around their hardware needs today, which were really about a third of what they were planning to purchase in the short term. We were able to set up dedicated servers, integrate Cloud Compute Instances for short-term spikes in CPU needs, and work in a storage solution that could grow as their needs increased. To top it off, we also developed a High Availability (HA) strategy that put pieces in place so they could easily shift their entire operation to a data center in a different city, should it ever be necessary. This was an added value they knew they couldn't come close to executing themselves.

The best part of this example is that even after about a year of service and maintaining a consistent growth pattern, they still have not spent what they would have just in paying the additional two employees they would have had to hire. SoftLayer gave them the means to save $140,000 up front and thousands per month ever since. That customer is planning on moving the rest of their infrastructure into our facilities when their current contracts run out ... They've told me every time we've spoken that they want to make this move immediately, but they are still paying for decisions they made years ago.
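To make the arithmetic of that example concrete, here is a minimal sketch of the year-one comparison. The hardware and staffing figures come from the story above; the monthly on-demand rate is a hypothetical placeholder, not the customer's actual bill:

```python
# Figures from the in-house plan described above; on_demand_monthly is an
# assumed placeholder rate, used only to illustrate the comparison.
upfront_hardware = 140_000    # capital outlay for four servers + SAN
staff_per_year = 120_000      # two additional employees
on_demand_monthly = 9_000     # hypothetical all-in monthly hosting fee

in_house_year_one = upfront_hardware + staff_per_year
on_demand_year_one = on_demand_monthly * 12

print(f"In-house, year one:  ${in_house_year_one:,}")    # $260,000
print(f"On-demand, year one: ${on_demand_year_one:,}")   # $108,000
print(f"Year-one savings:    ${in_house_year_one - on_demand_year_one:,}")
```

Even with the placeholder rate, the on-demand route comes in below the cost of the two additional hires alone, which matches the customer's experience described above.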

At least for them there is light at the end of the tunnel, and they will be truly taking control of their infrastructure.

-Doug

November 19, 2010

What Does it Cost (Part 2)

Your People and How They Relate to Your Infrastructure

If you read my previous blog, "What Does it Cost (Part 1) - The Overview," you may be interested in delving deeper into the conversation and math behind how all of this adds up. Asking yourself "Is it better to build infrastructure myself?" is a good thing, and you will inevitably weigh what it costs to do so against what it would cost to have SoftLayer do it for you, given that this is where our core competencies reside.

Remember that one of the big lessons I re-learned at the conferences I attended is that your people are your biggest assets. This lesson is showcased and repeated often, and for good reason, since it seems to be a time-tested rule. But while your people are your biggest assets, they can also easily be one of your biggest costs, especially if they are not managed properly. Every business should have a growth model, but one thing that can hold you back is the cost of growth (or your growing pains).

Think about the number of people you need when you run everything in-house and what that will wind up costing. If your business, network, and uptime are all mission critical, you'll also need to consider the number of people required to keep a facility staffed 24x7. You will need someone to fix the drive that breaks and needs to be replaced at 3:42 AM, won't you? Take the number of people you think you'll need and consider what would happen if you were to double in size in a single year (or use your own timeline). Would you need double the people, or possibly more once you account for the managers needed to keep everything in line with your business strategy? And what would the cost be when you consider more than just their salaries?

Think of the other things that do not jump out at you immediately, like taxes, insurance, a 401K plan, office space and other liabilities. Gary Kinman (VP of Accounting and Finance) estimates that each additional employee costs about 15-20% more than their salary alone, without even including things like office space. This is one of the most overlooked aspects of growth, because it doesn't just require the new people you hire; it can also monopolize time and production you would otherwise get from the people already on staff.
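Gary's rule of thumb is easy to sketch: take the salary and add 15-20% for taxes, insurance, the 401K plan and other liabilities (office space excluded, as noted above). A minimal illustration, using a hypothetical $80,000 salary:

```python
def fully_loaded_cost(salary, overhead_rate):
    """Salary plus the estimated 15-20% overhead (taxes, insurance, 401K, etc.)."""
    return salary * (1 + overhead_rate)

salary = 80_000  # hypothetical example salary
low = fully_loaded_cost(salary, 0.15)
high = fully_loaded_cost(salary, 0.20)
print(f"True annual cost: ${low:,.0f} to ${high:,.0f}")  # roughly $92,000 to $96,000
```

Multiply that gap by every additional hire an in-house build requires, and the overhead quickly becomes a line item of its own.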

Now, if you remember from Part 1, I mentioned that opportunity costs account for some of the biggest differences between having SoftLayer help you and doing things yourself. Reverse the previous scenario and say that right after you've doubled in size, there is a bust in the economy that forces you to contract. The easiest way to cut back on spending is people, so you may have to lay some off, which ultimately makes you the bad guy. Now here is where ugly gets really gruesome.

If you talked yourself into how cheap it can be to buy and do everything yourself, you are in a real tight spot, because now you may not have the people necessary to run all of your infrastructure, or in an even worse case, you may not even need it. What this spells out is that you keep paying for something that cannot be used, and you had to let people go just to keep the rest of the boat afloat. Didn't we say earlier that our people are our most important asset? You can't always know what kind of worker someone will be when you hire them or how things will work out, but you do want to put yourself in a position to keep the good ones you trust to push your business forward, and to keep them happy.

All right, that is enough doom and gloom. Let's look at this subject from another angle. As you grow, generally everything you have and everything you use will grow right along with the company. We covered the fact that it will probably become more and more obvious that you'll need more people to do the work of your business. Hiring systems administrators, DBAs, and development staff can all be good moves that impact your business directly; however, are you putting them in the best position to be successful? Have you ever seen the show "Undercover Boss"? In a lot of episodes, the CEO turns out not to be cut out for many of the other jobs in the company and comes away with a much greater appreciation for the people who do those jobs and how hard they work. Sometimes the comment is made that if the boss were really trying to get that job, he wouldn't last long. Keep that thought in mind when asking those same sysadmins, DBAs, and developers to do jobs they do not specialize in.

Taking people from positions where they earn an "A" or a "B+" and putting them into positions where they earn a "C-", a "D", or even an "F" will likely hurt production, lower morale, and tank the value of the investment you've made in the employees themselves and/or the infrastructure you purchased.

The bottom line is that the world is evolving toward working smarter, lessening risk, and (drawing back to Part 1) getting more out of less. The best way to avoid unnecessary risk is to not overextend yourself in the first place and to stay flexible so you can react and adapt to the market around you. This is what SoftLayer is built for: giving you the most options to increase your ability to innovate and execute, without sacrificing any level of control and without large sums of upfront capital.

I am guessing that about 9 times out of 10, if you take the time to sit down and do the math, it all makes perfect sense.

-Doug
