news

November 19, 2013

Protect Your Data: Configure EVault for Server Backups

In "The Tenth Anniversary" episode of "Everybody Loves Raymond," Raymond accidentally records the Super Bowl over his wedding video. He hilariously tries to compensate for his gaffe by renewing his wedding vows so he can make a new tape for his wife Debra. If life imitates art, it's worth considering what would happen if that tape held your business data. It would be disaster!

While it's unlikely that one of your sysadmins will accidentally record the Super Bowl over the data in your database server cluster, data loss can occur in a number of ways. If your business data is not protected and backed up, it's unlikely that you'll have a neat and tidy sitcom episode resolution. Luckily, SoftLayer provides simple, inexpensive backup capabilities with software such as EVault, so you shouldn't ever be worried about anyone pulling a Raymond on your data.

The following quick, four-step process walks you through how to protect and back up your data by subscribing to SoftLayer's EVault Backup client. This software enables you to design and set your backup schedule, protecting your business from unexpected costs because of accidental deletions, viruses, and other disasters. To follow along on your own servers, your computing instances or bare metal servers need to be provisioned, and you need to have root or administrator level access to those servers. For the sake of brevity, I'll be using a Linux operating system in this guide, but if you're running Windows, the process, in general, is no different.

Step 1 - Order EVault Backup for the server or computing instance

  1. Log into the SoftLayer Customer Portal and select the server(s) that needs storage services from the device list.
  2. Scroll down to the Storage section. Select the Add (or Modify) link located on the right hand corner of the EVault record to place an order for an EVault Backup client subscription.
  3. On the EVault ordering screen, select either Local or Remote Data Center and the desired amount of storage. Agree to the terms and conditions and click the Order EVault button to place your EVault storage order.
  4. The order is typically provisioned in 5 minutes or less and the system creates a user and password for the new instance of EVault.
  5. Click Services→Storage→EVault and expand the EVAULT link to make note of the user credentials, which will be used in Step 3.

Step 2 - Download the EVault agent on the server or computing instance

  1. SSH into the server or computing instance and run the following command:
    # wget –N http://downloads.service.softlayer.com/evault/evault_manual.sh

Step 3 - Register the server or computing instance with EVault in order to run back up and restore jobs

  1. From the command prompt on the server or compute instance run the following command to register it with EVault:
    ~]# sh ./evault_manual.sh
  2. In the ensuing prompts, enter the credentials that were noted Step 1.5 and use ev-webcc01.service.softlayer.com for the web-based agent console address.

    Note: In the event the agent fails to register with EVault, you can quickly register the agent manually by running ~]#<Installation directory>/register

Once you've made it to this point, you're ready to run backup and restore jobs.

Step 4 – Login into EVault console with WebCCLogin

  1. From the SoftLayer Customer Portal, click Services→Storage→EVault.
  2. Expand the server or compute instance to which EVault Backup is attached. In the right-hand corner of the server entry you will find a link to WebCCLogin.
  3. Click the WebCCLogin link for the EVault Web CentralControl screen. Type in the credentials from Step 1.5 and you’ll be taken to the EVault Backup and Restore interface.
  4. You are now ready to run your backup and restore jobs!

Check your backups often to confirm that they're being created when, where, and how you want them to be created. To prepare for any possible disaster recovery scenarios, schedule periodic tests of your backups: Restore the most recent backup of your production server to an internal server. That way, if someone pulls a Raymond on your server(s), you'll be able to get all of your data back online quickly. If you're interested in learning more, visit the Evault Backup page on KnowledgeLayer.

-Vinayak Harnoor

Vinayak Harnoor is a Technical Architect with the IBM Global Technology Services (GTS) Global Cloud Ecosystem team.

November 14, 2013

Enhancing Usability by Building User Confidence

Consider your experiences with web applications, and see if this scenario seems familiar: Your electricity bill has some incorrect charges on it. Fearing that you will have to spend 40 minutes on hold if you call in, you find that the electric company website has a support center where you can submit billing issues and questions; you are saved! You carefully fill out the form with your sixteen-digit account number and detailed description of the incorrect charges. You read it over and click the submit button. Your page goes blank for a couple of seconds, the form comes back with a note saying you typed in your phone number incorrectly, and the detailed description you spent eleven minutes meticulously writing is gone.

Web applications have gotten much better at preventing these kinds of user experiences over the past few years, and I'm sure that none of your applications have this problem (if they do, fix it right now!), but "usability" is more than just handling errors gracefully. Having a seamless process is only half the battle when it comes to giving your users a great experience with your application. The other half of the battle is a much more subjective: Your users need to feel confident in their success every step of the way. By keeping a few general guidelines in mind, you can instill confidence in your users so that they feel positive about your application from start to finish with whatever they are trying to accomplish.

1. Keep the user in a familiar context.

As the user in our electric company support application example, let's assume the process works and does not lose any of my information. I have to have faith that the application is going to do what I expect it to do when the page refreshes. Faith and unfamiliar technology do not exactly go hand in hand. Instead of having the form submit with a page refresh, the site's developers could introduce a progress wheel or other another kind of indicator that shows the data is being submitted while the content is still visible. If detailed content never goes away during the submission process, I'm confident that I still have access to my information.

Another example of the same principle is the use of modal windows. Modal windows are presented on top of a previous page, so users have a clear way of going back if they get confused or decide they navigated to the wrong place. By providing this new content on top of a familiar page, users are much less likely to feel disoriented if they get stuck or lost, and they will feel more confident when they're using the application.

2. Reassure the user with immediate feedback.

By communicating frequently and clearly, users are reassured, and they are much less likely to become anxious. Users want to see their actions get a response from your application. In our electric company support application example, imagine how much better the experience would be if a small blurb was displayed in red next to the phone number text box when I typed in my phone number in the wrong format. The immediate feedback would pinpoint the problem when it is easy to correct, and it would make me confident that when the phone number is updated, the application will continue to work as expected.

3. Provide warnings or extra information for dangerous or complicated operations.

When users are new to an application, they are not always sure which actions will have negative consequences. This is another great opportunity for communication. Providing notices or alerts for important or risky operations can offer a good dose of hesitation for new users who aren't prepared. Effective warnings or notices will tell the user when they will want to perform this action or what the negative consequences might be, so the user can make an informed decision. Users are confident with informed decisions because a lack information causes anxiety.

I learned how to implement this tip when I designed a wizard system for a previous employer that standardized how the company's application would walked users through any step-by-step process. My team decided early on to standardize a review step at the end of any implemented wizard. This was an extra step that every user had to go through for every wizard in the application, but it made all of the related processes much more usable and communicative. This extra information gave the users a chance to see the totality of the operation they were performing, and it gave them a chance to correct any mistakes. Implementing this tip resulted in users who were fully informed and confident throught the process of very complicated operations.

4. Do not assume your users know your terminology, and don't expect them to learn it.

Every organization has its own language. I have never encountered an exception to this rule. It cannot be helped! Inside your organization, you come up with a defined vocabulary for referencing the topics you have to work with every day, but your users won't necessarily understand the terminology you use internally. Some of your ardent users pick up on your language through osmosis, but the vast majority of users just get confused when they encounter terms they are not familiar with.

When interacting with users, refrain from using any of your internal language, and strictly adhere to a universally-accepted vocabulary. In many cases, you need shorthand to describe complex concepts that users will already understand. In this situation, always use universal or industry-wide vocabulary if it is available.

This practice can be challenging and will often require extra work. Let's say you have a page in your application dealing with "display devices," which could either be TVs or monitors. All of your employees talk about display devices because to your organization, they are essentially the same thing. The technology of your application handles all display devices in exactly the same way, so as good software designers you have this abstracted (or condensed for non-technical people) so that you have the least amount of code possible. The easiest route is to just have a page that talks about display devices. The challenge with that approach is that your users understand what monitors and TVs are, but they don't necessarily think of those as display devices.

If that's the case, you should use the words "monitors" and "TVs" when you're talking about display devices externally. This can be difficult, and it requires a lot of discipline, but when you provide familiar terminology, users won't be disoriented by basic terms. To make users more comfortable, speak to them in their language. Don't expect them to learn yours, because most of them won't.

When you look at usability through the subjective lens of user confidence, you'll find opportunities to enhance your user experience ... even when you aren't necessarily fixing anything that's broken. While it's difficult to quantify, confidence is at the heart of what makes people like or dislike any product or tool. Pay careful attention to the level of confidence your users have throughout your application, and your application can reach new heights.

-Tony

November 11, 2013

Sysadmin Tips and Tricks - Using the ‘for’ Loop in Bash

Ever have a bunch of files to rename or a large set of files to move to different directories? Ever find yourself copy/pasting nearly identical commands a few hundred times to get a job done? A system administrator's life is full of tedious tasks that can be eliminated or simplified with the proper tools. That's right ... Those tedious tasks don't have to be executed manually! I'd like to introduce you to one of the simplest tools to automate time-consuming repetitive processes in Bash — the for loop.

Whether you have been programming for a few weeks or a few decades, you should be able to quickly pick up on how the for loop works and what it can do for you. To get started, let's take a look at a few simple examples of what the for loop looks like. For these exercises, it's always best to use a temporary directory while you're learning and practicing for loops. The command is very powerful, and we wouldn't want you to damage your system while you're still learning.

Here is our temporary directory:

rasto@lmlatham:~/temp$ ls -la
total 8
drwxr-xr-x 2 rasto rasto 4096 Oct 23 15:54 .
drwxr-xr-x 34 rasto rasto 4096 Oct 23 16:00 ..
rasto@lmlatham:~/temp$

We want to fill the directory with files, so let's use the for loop:

rasto@lmlatham:~/temp$ for cats_are_cool in {a..z}; do touch $cats_are_cool; done;
rasto@lmlatham:~/temp$

Note: This should be typed all in one line.

Here's the result:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z
rasto@lmlatham:~/temp$

How did that simple command populate the directory with all of the letters in the alphabet? Let's break it down.

for cats_are_cool in {a..z}

The for is the command we are running, which is built into the Bash shell. cats_are_cool is a variable we are declaring. The specific name of the variable can be whatever you want it to be. Traditionally people often use f, but the variable we're using is a little more fun. Hereafter, our variable will be referred to as $cats_are_cool (or $f if you used the more boring "f" variable). Aside: You may be familiar with declaring a variable without the $ sign, and then using the $sign to invoke it when declaring environment variables.

When our command is executed, the variable we declared in {a..z}, will assume each of the values of a to z. Next, we use the semicolon to indicate we are done with the first phase of our for loop. The next part starts with do, which say for each of a–z, do <some thing>. In this case, we are creating files by touching them via touch $cats_are_cool. The first time through the loop, the command creates a, the second time through b and so forth. We complete that command with a semicolon, then we declare we are finished with the loop with "done".

This might be a great time to experiment with the command above, making small changes, if you wish. Let's do a little more. I just realized that I made a mistake. I meant to give the files a .txt extension. This is how we'd make that happen:

for dogs_are_ok_too in {a..z}; do mv $dogs_are_ok_too $dogs_are_ok_too.txt; done;
Note: It would be perfectly okay to re-use $cats_are_cool here. The variables are not persistent between executions.

As you can see, I updated the command so that a would be renamed a.txt, b would be renamed b.txt and so forth. Why would I want to do that manually, 26 times? If we check our directory, we see that everything was completed in that single command:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z.txt
rasto@lmlatham:~/temp$

Now we have files, but we don't want them to be empty. Let's put some text in them:

for f in `ls`; do cat /etc/passwd > $f; done

Note the backticks around ls. In Bash, backticks mean, "execute this and return the results," so it's like you executed ls and fed the results to the for loop! Next, cat /etc/passwd is redirecting the results to $f, in filenames a.txt, b.txt, etc. Still with me?

So now I've got a bunch of files with copies of /etc/passwd in them. What if I never wanted files for a, g, or h? First, I'd get a list of just the files I want to get rid of:

rasto@lmlatham:~/temp$ ls | egrep 'a|g|h'
a.txt
g.txt
h.txt

Then I could plug that command into the for loop (using backticks again) and do the removal of those files:

for f in `ls | egrep 'a|g|h'`; do rm $f; done

I know these examples don't seem very complex, but they give you a great first-look at the kind of functionality made possible by the for loop in Bash. Give it a whirl. Once you start smartly incorporating it in your day-to-day operations, you'll save yourself massive amounts of time ... Especially when you come across thousands or tens of thousands of very similar tasks.

Don't do work a computer should do!

-Lee

November 1, 2013

Paving the Way for the DevOps Revolution

The traditional approach to software development has been very linear: Your development team codes a release and sends it over to a team of quality engineers to be tested. When everything looks good, the code gets passed over to IT operations to be released into production. Each of these teams operates within its own silo and makes changes independent of the other groups, and at any point in the process, it's possible a release can get kicked back to the starting line. With the meteoric rise of agile development — a development philosophy geared toward iterative and incremental code releases — that old waterfall-type development approach is being abandoned in favor of a DevOps approach.

DevOps — a fully integrated development and operations approach — streamlines the software development process in an agile development environment by consolidating development, testing and release responsibilities into one cohesive team. This way, ideas, features and other developments can be released very quickly and iteratively to respond to changing and growing market needs, avoiding the delays of long, drawn-out and timed dev releases.

To help you visualize the difference between the traditional approach and the DevOps approach, take a look at these two pictures:

Traditional Waterfall Development
SoftLayer DevOps Blog

DevOps
SoftLayer DevOps Blog

Unfortunately, many businesses struggle to adopt the DevOps approach because they simply update their org chart by merging their traditional teams, but their development philosophy doesn't change at the same time. As a result, I've encountered a lot of companies who have been jaded by previous attempts to move to a DevOps model, and I'm not alone. There is a significant need in the marketplace for some good old fashioned DevOps expertise.

A couple months ago, my friend Raj Bhargava pinged me with a phenomenal idea to put on a DevOps "un-conference" in Boulder, Colorado, to address the obvious need he's observed for DevOps education and best practices. Raj is a serial, multiple-exit entrepreneur from Boulder, and he is the co-founder and CEO of a DevOps-focused startup there called JumpCloud. When he asked if I would like to co-chair the event and have SoftLayer as a headline sponsor alongside JumpCloud, the answer was a quick and easy "Yes!"

Sure, there have been other DevOps-related conferences around the world, but ours was designed to be different from the outset. As strange as it may sound, half of the conference intentionally occurred outside of the conference: One of our highest priorities was to strike up conversations between the participants before, during and after the event. If we're putting on a conference to encourage a collaborative development approach, it would be counterproductive for us to use a top-down, linear approach to engaging the attendees, right?

I'm happy to report that this inaugural attempt of our untested concept was an amazing success. We kept the event private for our first run at the concept, but the event was bursting at the seams with brilliant developers and tech influencers. Brad Feld and our friends from the Foundry Group invited all of their portfolio CEO's and CTO's. David Cohen, co-founder of Techstars and head honcho at Bullet Time Ventures did the same. JumpCloud and SoftLayer helped round out the attendee list with a few of our most innovative partners as well as a few of technologists from within our own organizations. It was an incredible mix of super-smart tech pros, business leaders and VC's from all over the world.

With such a diverse group of attendees, the conversations at the event were engaging, energizing and profound. We discussed everything from how startups should incorporate automation into their business plans at the outset to how the practice of DevOps evolves as companies scale quickly. At the end of the day, we brought all of those theoretical discussions back down to the ground by sharing case studies of real companies that have had unbelievable success in incorporating DevOps into their businesses. I had the honor of wrapping up the event as moderator of a panel with Jon Prall from Sendgrid, Scott Engstrom from Gnip and Richard Miller of Mocavo, and I couldn't have been happier with the response.

I'd like to send a big thanks to everyone who participated, especially our cosponsors — JumpCloud, VictorOps, Authentic8, DH Capital, SendGrid, Cooley, Pivot Desk, SVP and Pantheon.

I'm looking forward to opening this up to the world next year!

-@PaulFord

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, a growing number of businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by one or more non-heterogeneous elements. In the traditional IT sense, those non-heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, a hybrid environment leverages different heterogeneous elements: Bare metal and virtual server instances, delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)

Traditional Hybrid

Figure 2: SoftLayer's Hybrid - Dedicated + Virtual

SoftLayer Hybrid

Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. Actually, the use of the term is a lot more similar than I expected. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads from on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves a CapEx to an OpEx model. In the past, adopting infrastructure as a service (IaaS) involved shoehorning workloads into whichever virtual resource closest matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.

The real-world applications are pretty obvious: Your company is considering moving part of a workload to cloud in order to handle peak season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Furthering the dilemma, you have to capitalize on the assets you already have that are still of use to the company.

You finally have the flexibility to slowly transition your environment to a scalable, flexible cloud environment without sacrificing. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, so you shouldn't feel pressure to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will reduce your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so chose the best assets to move over. "Best" does not necessarily mean "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.

October 22, 2013

JumpCloud: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome David Campbell from JumpCloud. JumpCloud is an automated SaaS-based offering that automates the manual, tedious system administration tasks for DevOps and IT pros. It works with your provisioning to complete your operations set by automating server maintenance, management, monitoring, and security.

User Management in a DevOps World

Maybe you're a developer who's recently been given responsibility for managing production infrastructure at your company. Or maybe you're a career SysAdmin whose boss read the DevOps Cookbook and decided that it's time for you to learn to embrace DevOps and start treating your configuration as code and automating everything. DevOps promises to change the way organizations develop, operate and maintain applications and IT infrastructure, both on-premise and in the cloud. However you came upon it, you're now firmly entrenched in the world of DevOps.

No matter what your background, you're probably not alone in terms of needing access to the servers in your environment. Which brings us to the topic of this post. It's bad practice to use a shared "root" account to manage your systems and especially to run your application. So you want to create and manage separate user accounts. This is easy enough to do manually when you have only one or two admins and just a couple of servers. But in today's elastic, auto-scaling environments, you may have two servers at 9am and 1200 servers at 3pm.

So what to do?

In short, what you want is a method by which you can have each admin within your organization have their own user account on all of the systems within your organization to which they should have access. You want to require the admins to use ssh keys to authenticate to the servers, as requiring key based auth will make it impossible for brute force attackers to guess passwords in order to compromise your systems. You likely will want to grant "sudo" access to certain admins, and have them prove their identity to the system before executing privileged commands by entering their password. You may want to require multi factor authentication for admin shell access to especially critical systems, like production database servers.

Access needs to be granted when new admins join your team, and when new servers are brought up in the environment. That's where it gets complicated. Maybe you don't want the junior admin having full access to the customer database system? Access also needs to be removed when somebody inevitably leaves the company, sometimes unexpectedly.

There are a lot of DevOps friendly ways to automate the process of provisioning and deprovisioning user accounts. Techniques can be as simple as using rsync to copy "shadow files" from one system in the environment to all systems in the environment, though this can be tricky to manage in auto-scaling environments.

More advanced approaches involve using configuration management tools like Puppet or Chef to manage local user accounts on managed systems. These tools have native capability for user management, but do not provide any centralized audit trail about who is doing what on your servers. They also make it difficult for the user to select their own initial credentials, or change them down the road should they be forgotten or compromised. Using configuration management tools to manage user accounts also requires "code changes" to add or remove users, and changes can take 30 minutes or more to propagate through your whole environment.

If you want to automate and streamline your server user management process or you're interested in enhancing the security of your infrastructure, visit JumpCloud. We can help make quick work of tedious user management and security issues so that you can get back to growing your business.

-David Campbell, JumpCloud

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more come.
October 16, 2013

Tips and Tricks: Troubleshooting Email Issues

Working in support, one of the most common issues we troubleshoot is a customer's ability to receive email. Depending on email server, this can be a headache and a half to figure out, but more often than not, we're able to fix the problem with one of only a few simple solutions. Because the SoftLayer Blog audience loves technical tips and tricks, I thought I'd share a few easy steps that make pinpointing the root cause of email issues much easier.

Before you gear up to go into battle, check the that server is not out of disk space on /var and that it is not in a read only state. That precursory step may seem silly, but Occam's Razor often holds true in technical troubleshooting. Once you verify that those two common problems aren't causing your email problems, the next step is to determine whether the email issues are server-wide or isolated to one mail account/domain. To do that, the first thing you need to do is make sure that the IMAP and POP services are responding.

Check IMAP and POP Services

The universal approach to checking IMAP and POP services is to use telnet:

telnet <serverip> 110
telnet <serverip> 143

If either of those commands fail, you're able to pinpoint which service to check on your server.

For most variants of Linux, you can check both services with a single command: netstat -plan|egrep -i "110|143". The resulting output will show if the services are listening and which process is doing the listening. In Windows, you can run a similar command from a command prompt: netstat -anb|find "LISTEN"| findstr "110 143".

If the ports are listening, and you're able to connect to them over telnet, your next stop should be your server's error logs.

Check Error Logs

You want to look for any mail errors that might clue you into the root cause of your email issues. In Linux, you can check /var/log/maillog, and in Windows, you can filter eventvwr.msc for mail only. If there are errors, a simple search will highlight them quickly.

If there are no errors, it's time to dig into the mail queue directly.

Check the Mail Queue

Depending on the mail server you use, the commands here are going to vary. Here are a few examples of how we'd investigate the most common mail servers we encounter:

QMail

Display the mail queue: /var/qmail/bin/qmail-qread
Display the number of messages in the queue: /var/qmail/bin/qmail-qstat
Reference article: Gaining Control Over the QMail Queue

Sendmail

Display the mail queue: sendmail -bp or mailq
Display the number of messages in the queue: mailq –OmaxQueueRunSize=1
Reference article: Quick Sendmail Cheatsheet

Exim

Display the mail queue: exim -bp
Display the number of messages in the queue: exim -bpc
Reference article: Exim cheatsheet

MailEnable

MailEnable users can can check to see that messages are moving by opening the mail directory:
Program Files\MailEnable\Queues\SMTP\Inbound\Messages
Reference article: How to diagnose inbound message delivery delays

With these commands, you can filter through the email queues to see whether any of them are for the users or domains you're having problems with. If nothing obvious presents itself at that point, it's time for some active testing.

Active Testing

Send an email to your mailserver from an external mailserver (anything will do as long as it's not on the same server). Watch for logging of the email as it's delivered:
tail -f maillog
On busy mailservers you might add |grep youremailid or simply look for a new message in the directory where the email will be stored.

The your primary goal in troubleshooting your email issues in this way is to isolate the root cause of your problem so that you can fix it more quickly. SoftLayer customers have direct access to our support team to help you through this process, but it's always nice to keep a quick reference like this in your back pocket to be able to pinpoint the problem yourself.

-Bill

October 14, 2013

Product Spotlight: Vyatta Network Gateway Appliance

In the wake of our recent Vyatta network gateway appliance product launch, I thought I'd address some of the most common questions customers have asked me about the new offering. With inquiries spanning the spectrum from broad and general to detailed and specific, I might not be able to cover everything in this blog post, but at the very least, it should give a little more context for our new network gateway offering.

To begin, let's explore the simplest question I've been asked: "What is a network gateway?" A network gateway provides tools to manage traffic into and out of one or more VLANs (Virtual Local Area Networks). The network gateway serves a customer-configurable routing device that sits in front of designated VLANs. The servers in those VLANs route through the network gateway appliance as their first hop instead of Front-end Customer Routers (FCR) or Back-end Customer Routers (BCR). From an infrastructure perspective, SoftLayer's network gateway offering consists of a single server, and in the future, the offering will be expanded to multi-server configurations to support high availability needs and larger clustered configurations.

The general function of a network gateway may seem a little abstract, so let's look at a couple real world use cases to see how you can put that functionality to work in your own cloud environment.

Example 1: Complex Traffic Management
You have a multi-server cloud environment and a complex set of firewall rules that allow certain types of traffic to certain servers from specific addresses. Without a network gateway, you would need to configure multiple hardware and software firewalls throughout your topology and maintain multiple rules sets, but with the network gateway appliance, you streamline your configuration into a single point of control on both the public and private networks.

After you order a gateway appliance in the SoftLayer portal and configure which VLANs route through the appliance, the process of configuring the device is simple: You define your production, development and QA environments with distinct traffic rules, and the network gateway handles the traffic segmentation. If you wanted to create your own VPN to connect your hosted environment to your office or in-house data center, that configuration is quick and easy as well. The high-touch challenge of managing several sets of network rules across multiple devices is simplified and streamlined.

Example 2: Creating a Static NAT
You want to create a static NAT (Network Address Translation) so that you can direct traffic through a public IP address to an internal IP address. With the IPv4 address pool dwindling and new allocations being harder to come by, this configuration is becoming extremely popular to accommodate users who can't yet reach IPv6 addresses. This challenge would normally require a significant level of effort of even the most seasoned systems administrator, but with the gateway appliance, it's a painless process.

In addition to the IPv4 address-saving benefits, your static NAT adds a layer of protection for your internal web servers from the public network, and as we discussed in the first example, your gateway device also serves as a single configuration point for both inbound and outbound firewall rules.

If you have complex network-related needs, and you want granular control of the traffic to and from your servers, a gateway appliance might be the perfect tool for you. You get the control you want and save yourself a significant amount of time and effort configuring and tweaking your environment on-the-fly. You can terminate IPSec VPN tunnels, execute your own network address translation, and run diagnostic commands such as traffic monitoring (tcpdump) on your global environment. And in addition to that, your gateway serves as a single point of contact to configure sophisticated firewall rules!

If you want to learn more about the gateway appliance, check out KnowledgeLayer or contact our friendly sales team directly with your questions: sales@softlayer.com

-Ben

October 3, 2013

Improving Communications for Customer-Affecting Events

Service disruptions are never a good thing. Though SoftLayer invests extensively in design, equipment, and personnel training to reduce the risk of disruptions to our customers, in the technology world there are times where scheduled events or unplanned incidents are inevitable. During those times, we understand that restoring service is top priority, and almost as important is communicating to customers regarding the cause of the incident and the current status of our work to resolve it.

To date we've used a combination of tickets, emails, forum posts, portal "yellow" notifications, as well as RSS and Twitter feeds to provide status updates during service-affecting events. Many of these methods require customers to "come and get it," so we've been working on a more targeted, proactive approach to disseminating information.

I'm excited to report that our Development and Operations teams have collaborated on new functionality in the SoftLayer portal that will improve the way we share information with customers about unplanned infrastructure troubles or upcoming planned maintenances. With our new Event Communications toolset, we're able to pinpoint the accounts affected by an event and update users who opt-in to receive notifications about how these events may impact their services.

Notifications

As the development work is finalized, we plan to roll out a few phases of improvements. The first phase of implementation, which is ready today, enables email alerts for unplanned incidents, and any portal user account can opt-in to receive them. These emails provide details about the impact and current status of an unplanned incident in progress (UIP). In this phase, notifications can be sent for devices such as physical servers, CCIs and shared SLB VIPs, and we will be adding additional services over time.

In future phases of this project, we plan to include:

  • A new "Event" section of the Customer Portal which will allow customers to browse upcoming scheduled maintenances or current/recent unplanned incidents which may impact their services. In the past, we generated tickets for scheduled maintenances, so separating these event notifications will improve customer visibility.
  • Enhanced visibility for events in our mobile apps (phone/tablet).
  • Updates to affected services for a given event as customers add / change services.
  • Notification of newly added or newly updated events that have not been read by the user (similar email "inbox" functionality) in the portal.
  • Identification of any related current or recent events as a customer begins to open a ticket in the portal.
  • Reminders of upcoming scheduled maintenances along with progress updates to the event notification throughout the maintenance in some cases.
  • Improved ability to correlate specific incidents to customer service troubles.
  • Dissemination of RFO (reason-for-outage) statements to customers following a post-incident review of an unplanned service disruption.

Since we respect our customers' inboxes, these notifications will only be sent to user accounts that have opted in. If you'd like to receive them, simply log into the Customer Portal and navigate to "Notification Subscriptions" under the "Administration" menu (direct link). From that page, individual users can control event subscriptions, and portal logins that have administrative control over multiple users on the account can control the opt-in for themselves and their downstream users. For a more detailed walkthrough of the opt-in process, visit the KnowledgeLayer: "Update Subscription Settings for the Event Management System"

The Network Operations Center has already begun using this customer notification toolset for customer-affecting events, so we recommend that you opt-in as soon as possible to benefit from this new functionality.

-Dani

September 30, 2013

The Economics of Cloud Computing: If It Seems Too Good to Be True, It Probably Is

One of the hosts of a popular Sirius XM radio talk show was recently in the market to lease a car, and a few weeks ago, he shared an interesting story. In his research, he came across an offer he came across that seemed "too good to be true": Lease a new Nissan Sentra with no money due at signing on a 24-month lease for $59 per month. The car would as "base" as a base model could be, but a reliable car that can be driven safely from Point A to Point B doesn't need fancy "upgrades" like power windows or an automatic transmission. Is it possible to lease new car for zero down and $59 per month? What's the catch?

After sifting through all of the paperwork, the host admitted the offer was technically legitimate: He could lease a new Nissan Sentra for $0 down and $59 per month for two years. Unfortunately, he also found that "lease" is just about the extent of what he could do with it for $59 per month. The fine print revealed that the yearly mileage allowance was 0 (zero) — he'd pay a significant per-mile rate for every mile he drove the car.

Let's say the mileage on the Sentra was charged at $0.15 per mile and that the car would be driven a very-conservative 5,000 miles per year. At the end of the two-year lease, the 10,000 miles on the car would amount to a $1,500 mileage charge. Breaking that cost out across the 24 months of the lease, the effective monthly payment would be around $121, twice the $59/mo advertised lease price. Even for a car that would be used sparingly, the numbers didn't add up, so the host wound up leasing a nicer car (that included a non-zero mileage allowance) for the same monthly cost.

The "zero-down, $59/mo" Sentra lease would be a fantastic deal for a person who wants the peace of mind of having a car available for emergency situations only, but for drivers who put the national average of 15,000 miles per year, the economic benefit of such a low lease rate is completely nullified by the mileage cost. If you were in the market to lease a new car, would you choose that Sentra deal?

At this point, you might be wondering why this story found its way onto the SoftLayer Blog, and if that's the case, you don't see the connection: Most cloud computing providers sell cloud servers like that car lease.

The "on demand" and "pay for what you use" aspects of cloud computing make it easy for providers to offer cloud servers exclusively as short-term utilities: "Use this cloud server for a couple of days (or hours) and return it to us. We'll just charge you for what you use." From a buyer's perspective, this approach is easy to justify because it limits the possibility of excess capacity — paying for something you're not using. While that structure is effective (and inexpensive) for customers who sporadically spin up virtual server instances and turn them down quickly, for the average customer looking to host a website or application that won't be turned off in a given month, it's a different story.

Instead of discussing the costs in theoretical terms, let's look at a real world example: One of our competitors offers an entry-level Linux cloud server for just over $15 per month (based on a 730-hour month). When you compare that offer to SoftLayer's least expensive monthly virtual server instance (@ $50/mo), you might think, "OMG! SoftLayer is more than three times as expensive!"

But then you remember that you actually want to use your server.

You see, like the "zero down, $59/mo" car lease that doesn't include any mileage, the $15/mo cloud server doesn't include any bandwidth. As soon as you "drive your server off the lot" and start using it, that "fantastic" rate starts becoming less and less fantastic. In this case, outbound bandwidth for this competitor's cloud server starts at $0.12/GB and is applied to the server's first outbound gigabyte (and every subsequent gigabyte in that month). If your server sends 300GB of data outbound every month, you pay $36 in bandwidth charges (for a combined monthly total of $51). If your server uses 1TB of outbound bandwidth in a given month, you end up paying $135 for that "$15/mo" server.

Cloud servers at SoftLayer are designed to be "driven." Every monthly virtual server instance from SoftLayer includes 1TB of outbound bandwidth at no additional cost, so if your cloud server sends 1TB of outbound bandwidth, your total charge for the month is $50. The "$15/mo v. $50/mo" comparison becomes "$135/mo v. $50/mo" when we realize that these cloud servers don't just sit in the garage. This illustration shows how the costs compare between the two offerings with monthly bandwidth usage up to 1.3TB*:

Cloud Cost v Bandwidth

*The graphic extends to 1.3TB to show how SoftLayer's $0.10/GB charge for bandwidth over the initial 1TB allotment compares with the competitor's $0.12/GB charge.

Most cloud hosting providers sell these "zero down, $59/mo car leases" and encourage you to window-shop for the lowest monthly price based on number of cores, RAM and disk space. You find the lowest price and mentally justify the cost-per-GB bandwidth charge you receive at the end of the month because you know that you're getting value from the traffic that used that bandwidth. But you'd be better off getting a more powerful server that includes a bandwidth allotment.

As a buyer, it's important that you make your buying decisions based on your specific use case. Are you going to spin up and spin down instances throughout the month or are you looking for a cloud server that is going to stay online the entire month? From there, you should estimate your bandwidth usage to get an idea of the actual monthly cost you can expect for a given cloud server. If you don't expect to use 300GB of outbound bandwidth in a given month, your usage might be best suited for that competitor's offering. But then again, it's probably worth mentioning that that SoftLayer's base virtual server instance has twice the RAM, more disk space and higher-throughput network connections than the competitor's offering we compared against. Oh yeah, and all those other cloud differentiators.

-@khazard

Pages

Subscribe to news