
November 11, 2013

Sysadmin Tips and Tricks - Using the ‘for’ Loop in Bash

Ever have a bunch of files to rename or a large set of files to move to different directories? Ever find yourself copy/pasting nearly identical commands a few hundred times to get a job done? A system administrator's life is full of tedious tasks that can be eliminated or simplified with the proper tools. That's right ... Those tedious tasks don't have to be executed manually! I'd like to introduce you to one of the simplest tools to automate time-consuming repetitive processes in Bash — the for loop.

Whether you have been programming for a few weeks or a few decades, you should be able to quickly pick up on how the for loop works and what it can do for you. To get started, let's take a look at a few simple examples of what the for loop looks like. For these exercises, it's always best to use a temporary directory while you're learning and practicing for loops. The command is very powerful, and we wouldn't want you to damage your system while you're still learning.

Here is our temporary directory:

rasto@lmlatham:~/temp$ ls -la
total 8
drwxr-xr-x 2 rasto rasto 4096 Oct 23 15:54 .
drwxr-xr-x 34 rasto rasto 4096 Oct 23 16:00 ..
rasto@lmlatham:~/temp$

We want to fill the directory with files, so let's use the for loop:

rasto@lmlatham:~/temp$ for cats_are_cool in {a..z}; do touch $cats_are_cool; done;
rasto@lmlatham:~/temp$

Note: This should be typed all in one line.

Here's the result:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z
rasto@lmlatham:~/temp$

How did that simple command populate the directory with all of the letters in the alphabet? Let's break it down.

for cats_are_cool in {a..z}

The for is the command we are running; it's built into the Bash shell. cats_are_cool is a variable we are declaring. The specific name of the variable can be whatever you want it to be. Traditionally people often use f, but the variable we're using is a little more fun. Hereafter, our variable will be referred to as $cats_are_cool (or $f if you used the more boring "f" variable). Aside: This may look familiar from environment variables, which are likewise declared without the $ sign and invoked with it.

When our command is executed, the variable we declared will take on each of the values in {a..z} — every letter from a to z — one pass at a time. The semicolon indicates that we are done with the first phase of our for loop. The next part starts with do, which says: for each of a–z, do <some thing>. In this case, we are creating files by touching them via touch $cats_are_cool. The first time through the loop, the command creates a; the second time through, b; and so forth. We complete that command with a semicolon, then we declare that we are finished with the loop with done.
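If it helps to see the pattern without the alphabet example, every for loop has the same general shape:

for VARIABLE in LIST; do COMMANDS; done

# for example, print three names, one per pass through the loop:
for name in alice bob carol; do echo $name; done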

This might be a great time to experiment with the command above, making small changes, if you wish. Let's do a little more. I just realized that I made a mistake. I meant to give the files a .txt extension. This is how we'd make that happen:

for dogs_are_ok_too in {a..z}; do mv $dogs_are_ok_too $dogs_are_ok_too.txt; done;

Note: It would be perfectly okay to re-use $cats_are_cool here. The variables are not persistent between executions.

As you can see, I updated the command so that a would be renamed a.txt, b would be renamed b.txt and so forth. Why would I want to do that manually, 26 times? If we check our directory, we see that everything was completed in that single command:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z.txt
rasto@lmlatham:~/temp$

Now we have files, but we don't want them to be empty. Let's put some text in them:

for f in `ls`; do cat /etc/passwd > $f; done

Note the backticks around ls. In Bash, backticks mean, "execute this and return the results," so it's as if you ran ls yourself and fed the resulting list of filenames to the for loop. Then, for each pass through the loop, cat /etc/passwd prints the contents of /etc/passwd and the > redirects that output into $f — first a.txt, then b.txt, and so on. Still with me?
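One aside worth mentioning: If your filenames ever contain spaces, looping over the output of ls will split those names apart at the whitespace. A glob pattern accomplishes the same thing without that risk — here's the same loop written that way, with the variable quoted for good measure:

for f in *.txt; do cat /etc/passwd > "$f"; done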

So now I've got a bunch of files with copies of /etc/passwd in them. What if I never wanted files for a, g, or h? First, I'd get a list of just the files I want to get rid of:

rasto@lmlatham:~/temp$ ls | egrep 'a|g|h'
a.txt
g.txt
h.txt

Then I could plug that command into the for loop (using backticks again) and do the removal of those files:

for f in `ls | egrep 'a|g|h'`; do rm $f; done
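For a simple pattern like this one, you could also let Bash's own filename matching build the list for you — the bracket expression below matches the same three files without spawning ls or egrep:

for f in [agh].txt; do rm "$f"; done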

I know these examples don't seem very complex, but they give you a great first look at the kind of functionality made possible by the for loop in Bash. Give it a whirl. Once you start smartly incorporating it in your day-to-day operations, you'll save yourself massive amounts of time ... Especially when you come across thousands or tens of thousands of very similar tasks.

Don't do work a computer should do!

-Lee

November 1, 2013

Paving the Way for the DevOps Revolution

The traditional approach to software development has been very linear: Your development team codes a release and sends it over to a team of quality engineers to be tested. When everything looks good, the code gets passed over to IT operations to be released into production. Each of these teams operates within its own silo and makes changes independent of the other groups, and at any point in the process, it's possible a release can get kicked back to the starting line. With the meteoric rise of agile development — a development philosophy geared toward iterative and incremental code releases — that old waterfall-type development approach is being abandoned in favor of a DevOps approach.

DevOps — a fully integrated development and operations approach — streamlines the software development process in an agile development environment by consolidating development, testing and release responsibilities into one cohesive team. This way, ideas, features and other developments can be released very quickly and iteratively to respond to changing and growing market needs, avoiding the delays of long, drawn-out and timed dev releases.

To help you visualize the difference between the traditional approach and the DevOps approach, take a look at these two pictures:

Traditional Waterfall Development

DevOps

Unfortunately, many businesses struggle to adopt the DevOps approach because they simply update their org chart by merging their traditional teams without changing their development philosophy at the same time. As a result, I've encountered a lot of companies that have been jaded by previous attempts to move to a DevOps model, and I'm not alone. There is a significant need in the marketplace for some good old-fashioned DevOps expertise.

A couple months ago, my friend Raj Bhargava pinged me with a phenomenal idea to put on a DevOps "un-conference" in Boulder, Colorado, to address the obvious need he's observed for DevOps education and best practices. Raj is a serial, multiple-exit entrepreneur from Boulder, and he is the co-founder and CEO of a DevOps-focused startup there called JumpCloud. When he asked if I would like to co-chair the event and have SoftLayer as a headline sponsor alongside JumpCloud, the answer was a quick and easy "Yes!"

Sure, there have been other DevOps-related conferences around the world, but ours was designed to be different from the outset. As strange as it may sound, half of the conference intentionally occurred outside of the conference: One of our highest priorities was to strike up conversations between the participants before, during and after the event. If we're putting on a conference to encourage a collaborative development approach, it would be counterproductive for us to use a top-down, linear approach to engaging the attendees, right?

I'm happy to report that this inaugural attempt of our untested concept was an amazing success. We kept the event private for our first run at the concept, but it was bursting at the seams with brilliant developers and tech influencers. Brad Feld and our friends from the Foundry Group invited all of their portfolio CEOs and CTOs. David Cohen, co-founder of Techstars and head honcho at Bullet Time Ventures, did the same. JumpCloud and SoftLayer helped round out the attendee list with a few of our most innovative partners as well as a few technologists from within our own organizations. It was an incredible mix of super-smart tech pros, business leaders and VCs from all over the world.

With such a diverse group of attendees, the conversations at the event were engaging, energizing and profound. We discussed everything from how startups should incorporate automation into their business plans at the outset to how the practice of DevOps evolves as companies scale quickly. At the end of the day, we brought all of those theoretical discussions back down to the ground by sharing case studies of real companies that have had unbelievable success in incorporating DevOps into their businesses. I had the honor of wrapping up the event as moderator of a panel with Jon Prall from Sendgrid, Scott Engstrom from Gnip and Richard Miller of Mocavo, and I couldn't have been happier with the response.

I'd like to send a big thanks to everyone who participated, especially our cosponsors — JumpCloud, VictorOps, Authentic8, DH Capital, SendGrid, Cooley, Pivot Desk, SVP and Pantheon.

I'm looking forward to opening this up to the world next year!

-@PaulFord

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, a growing number of businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by two or more heterogeneous elements. In the traditional IT sense, those heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, the heterogeneous elements are bare metal servers and virtual server instances, both delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)


Figure 2: SoftLayer's Hybrid - Dedicated + Virtual


Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. Actually, the two uses of the term are more similar than you might expect. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads from on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves from a CapEx model to an OpEx model. In the past, adopting infrastructure as a service (IaaS) involved shoehorning workloads into whichever virtual resource most closely matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.

The real-world applications are pretty obvious: Your company is considering moving part of a workload to the cloud in order to handle peak season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Furthering the dilemma, you have to capitalize on the assets you already have that are still of use to the company.

You finally have the flexibility to slowly transition your environment to a scalable, flexible cloud environment without sacrifice. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, so you shouldn't feel pressure to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will reduce your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so choose the best assets to move over. "Best" does not necessarily mean the "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.

October 22, 2013

JumpCloud: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome David Campbell from JumpCloud. JumpCloud is a SaaS-based offering that automates the manual, tedious system administration tasks facing DevOps and IT pros. It picks up where your provisioning tools leave off, automating server maintenance, management, monitoring and security.

User Management in a DevOps World

Maybe you're a developer who's recently been given responsibility for managing production infrastructure at your company. Or maybe you're a career SysAdmin whose boss read the DevOps Cookbook and decided that it's time for you to embrace DevOps and start treating your configuration as code and automating everything. However you came upon it, you're now firmly entrenched in the world of DevOps — an approach that promises to change the way organizations develop, operate and maintain applications and IT infrastructure, both on-premise and in the cloud.

No matter what your background, you're probably not alone in terms of needing access to the servers in your environment. Which brings us to the topic of this post. It's bad practice to use a shared "root" account to manage your systems and especially to run your application. So you want to create and manage separate user accounts. This is easy enough to do manually when you have only one or two admins and just a couple of servers. But in today's elastic, auto-scaling environments, you may have two servers at 9am and 1200 servers at 3pm.

So what to do?

In short, what you want is a method by which each admin within your organization has their own user account on all of the systems to which they should have access. You want to require the admins to use SSH keys to authenticate to the servers; requiring key-based auth (and disabling password auth) makes it impossible for brute-force attackers to compromise your systems by guessing passwords. You likely will want to grant "sudo" access to certain admins and have them prove their identity to the system by entering their password before executing privileged commands. You may want to require multi-factor authentication for admin shell access to especially critical systems, like production database servers.
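To make that concrete, here's a rough sketch of what provisioning a single admin on a single Linux host looks like when it's done by hand (the username, the key and the Debian-style "sudo" group below are all illustrative):

# create the account with a home directory and shell
useradd -m -s /bin/bash jdoe

# install the admin's public SSH key
mkdir -p /home/jdoe/.ssh
echo "ssh-rsa AAAA...example jdoe@workstation" >> /home/jdoe/.ssh/authorized_keys
chmod 700 /home/jdoe/.ssh
chmod 600 /home/jdoe/.ssh/authorized_keys
chown -R jdoe:jdoe /home/jdoe/.ssh

# let the admin run privileged commands after re-entering their password
usermod -aG sudo jdoe

Now multiply those steps by every admin and every server in the environment — and remember to reverse them all when somebody leaves — and it's clear why automating this is so attractive.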

Access needs to be granted when new admins join your team and when new servers are brought up in the environment. That's where it gets complicated. Maybe you don't want the junior admin having full access to the customer database system. Access also needs to be removed when somebody inevitably leaves the company, sometimes unexpectedly.

There are a lot of DevOps friendly ways to automate the process of provisioning and deprovisioning user accounts. Techniques can be as simple as using rsync to copy "shadow files" from one system in the environment to all systems in the environment, though this can be tricky to manage in auto-scaling environments.

More advanced approaches involve using configuration management tools like Puppet or Chef to manage local user accounts on managed systems. These tools have native capability for user management, but do not provide any centralized audit trail about who is doing what on your servers. They also make it difficult for the user to select their own initial credentials, or change them down the road should they be forgotten or compromised. Using configuration management tools to manage user accounts also requires "code changes" to add or remove users, and changes can take 30 minutes or more to propagate through your whole environment.

If you want to automate and streamline your server user management process or you're interested in enhancing the security of your infrastructure, visit JumpCloud. We can help make quick work of tedious user management and security issues so that you can get back to growing your business.

-David Campbell, JumpCloud

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

October 16, 2013

Tips and Tricks: Troubleshooting Email Issues

Working in support, one of the most common issues we troubleshoot is a customer's ability to receive email. Depending on the mail server, this can be a headache and a half to figure out, but more often than not, we're able to fix the problem with one of only a few simple solutions. Because the SoftLayer Blog audience loves technical tips and tricks, I thought I'd share a few easy steps that make pinpointing the root cause of email issues much easier.

Before you gear up to go into battle, check that the server is not out of disk space on /var and that it is not in a read-only state. That precursory step may seem silly, but Occam's Razor often holds true in technical troubleshooting. Once you verify that those two common problems aren't causing your email problems, the next step is to determine whether the email issues are server-wide or isolated to one mail account/domain. To do that, the first thing you need to do is make sure that the IMAP and POP services are responding.

Check IMAP and POP Services

The universal approach to checking IMAP and POP services is to use telnet:

telnet <serverip> 110
telnet <serverip> 143

If either of those commands fails, you've pinpointed which service to check on your server.
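For reference, when a POP3 service is up and reachable, the telnet session typically greets you with a banner along these lines (the exact text varies by mail server; a healthy IMAP service on port 143 responds similarly, with a line beginning with * OK):

telnet 203.0.113.10 110
Trying 203.0.113.10...
Connected to 203.0.113.10.
Escape character is '^]'.
+OK POP3 server ready

Type quit to close the POP3 session cleanly.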

For most variants of Linux, you can check both services with a single command: netstat -plan|egrep -i "110|143". The resulting output will show if the services are listening and which process is doing the listening. In Windows, you can run a similar command from a command prompt: netstat -anb|find "LISTEN"| findstr "110 143".

If the ports are listening, and you're able to connect to them over telnet, your next stop should be your server's error logs.

Check Error Logs

You want to look for any mail errors that might clue you into the root cause of your email issues. In Linux, you can check /var/log/maillog, and in Windows, you can filter eventvwr.msc for mail only. If there are errors, a simple search will highlight them quickly.

If there are no errors, it's time to dig into the mail queue directly.

Check the Mail Queue

Depending on the mail server you use, the commands here are going to vary. Here are a few examples of how we'd investigate the most common mail servers we encounter:

QMail

Display the mail queue: /var/qmail/bin/qmail-qread
Display the number of messages in the queue: /var/qmail/bin/qmail-qstat
Reference article: Gaining Control Over the QMail Queue

Sendmail

Display the mail queue: sendmail -bp or mailq
Display the number of messages in the queue: mailq -OmaxQueueRunSize=1
Reference article: Quick Sendmail Cheatsheet

Exim

Display the mail queue: exim -bp
Display the number of messages in the queue: exim -bpc
Reference article: Exim cheatsheet

MailEnable

MailEnable users can check that messages are moving by opening the mail directory:
Program Files\MailEnable\Queues\SMTP\Inbound\Messages
Reference article: How to diagnose inbound message delivery delays

With these commands, you can filter through the email queues to see whether any of them are for the users or domains you're having problems with. If nothing obvious presents itself at that point, it's time for some active testing.

Active Testing

Send an email to your mail server from an external mail server (anything will do as long as it's not on the same server). Watch for logging of the email as it's delivered:

tail -f /var/log/maillog

On busy mail servers you might add |grep youremailid or simply look for a new message in the directory where the email will be stored.
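For example, assuming the mailx package is installed on the external machine, a quick end-to-end test looks something like this:

# from the external server:
echo "test body" | mail -s "delivery test" user@yourdomain.com

# on the server you're troubleshooting:
tail -f /var/log/maillog | grep user@yourdomain.com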

Your primary goal in troubleshooting your email issues this way is to isolate the root cause of your problem so that you can fix it more quickly. SoftLayer customers have direct access to our support team to help with this process, but it's always nice to keep a quick reference like this in your back pocket so you can pinpoint the problem yourself.

-Bill

October 14, 2013

Product Spotlight: Vyatta Network Gateway Appliance

In the wake of our recent Vyatta network gateway appliance product launch, I thought I'd address some of the most common questions customers have asked me about the new offering. With inquiries spanning the spectrum from broad and general to detailed and specific, I might not be able to cover everything in this blog post, but at the very least, it should give a little more context for our new network gateway offering.

To begin, let's explore the simplest question I've been asked: "What is a network gateway?" A network gateway provides tools to manage traffic into and out of one or more VLANs (Virtual Local Area Networks). The network gateway serves as a customer-configurable routing device that sits in front of designated VLANs. The servers in those VLANs route through the network gateway appliance as their first hop instead of the Front-end Customer Routers (FCR) or Back-end Customer Routers (BCR). From an infrastructure perspective, SoftLayer's network gateway offering consists of a single server, and in the future, the offering will be expanded to multi-server configurations to support high availability needs and larger clustered configurations.

The general function of a network gateway may seem a little abstract, so let's look at a couple real world use cases to see how you can put that functionality to work in your own cloud environment.

Example 1: Complex Traffic Management
You have a multi-server cloud environment and a complex set of firewall rules that allow certain types of traffic to certain servers from specific addresses. Without a network gateway, you would need to configure multiple hardware and software firewalls throughout your topology and maintain multiple rule sets, but with the network gateway appliance, you streamline your configuration into a single point of control on both the public and private networks.

After you order a gateway appliance in the SoftLayer portal and configure which VLANs route through the appliance, the process of configuring the device is simple: You define your production, development and QA environments with distinct traffic rules, and the network gateway handles the traffic segmentation. If you wanted to create your own VPN to connect your hosted environment to your office or in-house data center, that configuration is quick and easy as well. The high-touch challenge of managing several sets of network rules across multiple devices is simplified and streamlined.

Example 2: Creating a Static NAT
You want to create a static NAT (Network Address Translation) so that you can direct traffic through a public IP address to an internal IP address. With the IPv4 address pool dwindling and new allocations harder to come by, this configuration is becoming extremely popular for accommodating users who can't yet reach IPv6 addresses. This challenge would normally demand a significant level of effort from even the most seasoned systems administrator, but with the gateway appliance, it's a painless process.

In addition to the IPv4 address-saving benefits, your static NAT adds a layer of protection for your internal web servers from the public network, and as we discussed in the first example, your gateway device also serves as a single configuration point for both inbound and outbound firewall rules.
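To give you a sense of the workflow (this is a rough sketch — exact command syntax varies between Vyatta releases, and the addresses and interface here are placeholders), creating a destination NAT rule from the appliance's CLI looks something like this:

configure
set nat destination rule 10 inbound-interface eth0
set nat destination rule 10 destination address 203.0.113.25
set nat destination rule 10 translation address 10.20.30.40
commit
save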

If you have complex network-related needs, and you want granular control of the traffic to and from your servers, a gateway appliance might be the perfect tool for you. You get the control you want and save yourself a significant amount of time and effort configuring and tweaking your environment on-the-fly. You can terminate IPSec VPN tunnels, execute your own network address translation, and run diagnostic commands such as traffic monitoring (tcpdump) on your global environment. And in addition to that, your gateway serves as a single point of contact to configure sophisticated firewall rules!

If you want to learn more about the gateway appliance, check out KnowledgeLayer or contact our friendly sales team directly with your questions: sales@softlayer.com

-Ben

October 3, 2013

Improving Communications for Customer-Affecting Events

Service disruptions are never a good thing. Though SoftLayer invests extensively in design, equipment and personnel training to reduce the risk of disruptions to our customers, in the technology world there are times when scheduled events or unplanned incidents are inevitable. During those times, we understand that restoring service is the top priority, and almost as important is communicating to customers about the cause of the incident and the current status of our work to resolve it.

To date, we've used a combination of tickets, emails, forum posts, portal "yellow" notifications, and RSS and Twitter feeds to provide status updates during service-affecting events. Many of these methods require customers to "come and get it," so we've been working on a more targeted, proactive approach to disseminating information.

I'm excited to report that our Development and Operations teams have collaborated on new functionality in the SoftLayer portal that will improve the way we share information with customers about unplanned infrastructure troubles or upcoming planned maintenances. With our new Event Communications toolset, we're able to pinpoint the accounts affected by an event and update users who opt-in to receive notifications about how these events may impact their services.

Notifications

As the development work is finalized, we plan to roll out a few phases of improvements. The first phase of implementation, which is ready today, enables email alerts for unplanned incidents, and any portal user account can opt-in to receive them. These emails provide details about the impact and current status of an unplanned incident in progress (UIP). In this phase, notifications can be sent for devices such as physical servers, CCIs and shared SLB VIPs, and we will be adding additional services over time.

In future phases of this project, we plan to include:

  • A new "Event" section of the Customer Portal which will allow customers to browse upcoming scheduled maintenances or current/recent unplanned incidents which may impact their services. In the past, we generated tickets for scheduled maintenances, so separating these event notifications will improve customer visibility.
  • Enhanced visibility for events in our mobile apps (phone/tablet).
  • Updates to affected services for a given event as customers add / change services.
  • Notification of newly added or newly updated events that have not been read by the user (similar email "inbox" functionality) in the portal.
  • Identification of any related current or recent events as a customer begins to open a ticket in the portal.
  • Reminders of upcoming scheduled maintenances along with progress updates to the event notification throughout the maintenance in some cases.
  • Improved ability to correlate specific incidents to customer service troubles.
  • Dissemination of RFO (reason-for-outage) statements to customers following a post-incident review of an unplanned service disruption.

Since we respect our customers' inboxes, these notifications will only be sent to user accounts that have opted in. If you'd like to receive them, simply log into the Customer Portal and navigate to "Notification Subscriptions" under the "Administration" menu (direct link). From that page, individual users can control event subscriptions, and portal logins that have administrative control over multiple users on the account can control the opt-in for themselves and their downstream users. For a more detailed walkthrough of the opt-in process, visit the KnowledgeLayer: "Update Subscription Settings for the Event Management System"

The Network Operations Center has already begun using this customer notification toolset for customer-affecting events, so we recommend that you opt-in as soon as possible to benefit from this new functionality.

-Dani

September 30, 2013

The Economics of Cloud Computing: If It Seems Too Good to Be True, It Probably Is

One of the hosts of a popular Sirius XM radio talk show was recently in the market to lease a car, and a few weeks ago, he shared an interesting story. In his research, he came across an offer that seemed "too good to be true": Lease a new Nissan Sentra with no money due at signing on a 24-month lease for $59 per month. The car would be as "base" as a base model could be, but a reliable car that can be driven safely from Point A to Point B doesn't need fancy "upgrades" like power windows or an automatic transmission. Is it possible to lease a new car for zero down and $59 per month? What's the catch?

After sifting through all of the paperwork, the host admitted the offer was technically legitimate: He could lease a new Nissan Sentra for $0 down and $59 per month for two years. Unfortunately, he also found that "lease" is just about the extent of what he could do with it for $59 per month. The fine print revealed that the yearly mileage allowance was 0 (zero) — he'd pay a significant per-mile rate for every mile he drove the car.

Let's say the mileage on the Sentra was charged at $0.15 per mile and that the car would be driven a very conservative 5,000 miles per year. At the end of the two-year lease, the 10,000 miles on the car would amount to a $1,500 mileage charge. Breaking that cost out across the 24 months of the lease, the effective monthly payment would be around $121 — twice the $59/mo advertised lease price. Even for a car that would be used sparingly, the numbers didn't add up, so the host wound up leasing a nicer car (that included a non-zero mileage allowance) for the same monthly cost.

The "zero-down, $59/mo" Sentra lease would be a fantastic deal for a person who wants the peace of mind of having a car available for emergency situations only, but for drivers who put the national average of 15,000 miles per year, the economic benefit of such a low lease rate is completely nullified by the mileage cost. If you were in the market to lease a new car, would you choose that Sentra deal?

At this point, you might be wondering why this story found its way onto the SoftLayer Blog. Here's the connection: Most cloud computing providers sell cloud servers like that car lease.

The "on demand" and "pay for what you use" aspects of cloud computing make it easy for providers to offer cloud servers exclusively as short-term utilities: "Use this cloud server for a couple of days (or hours) and return it to us. We'll just charge you for what you use." From a buyer's perspective, this approach is easy to justify because it limits the possibility of excess capacity — paying for something you're not using. While that structure is effective (and inexpensive) for customers who sporadically spin up virtual server instances and turn them down quickly, for the average customer looking to host a website or application that won't be turned off in a given month, it's a different story.

Instead of discussing the costs in theoretical terms, let's look at a real world example: One of our competitors offers an entry-level Linux cloud server for just over $15 per month (based on a 730-hour month). When you compare that offer to SoftLayer's least expensive monthly virtual server instance (@ $50/mo), you might think, "OMG! SoftLayer is more than three times as expensive!"

But then you remember that you actually want to use your server.

You see, like the "zero down, $59/mo" car lease that doesn't include any mileage, the $15/mo cloud server doesn't include any bandwidth. As soon as you "drive your server off the lot" and start using it, that "fantastic" rate starts becoming less and less fantastic. In this case, outbound bandwidth for this competitor's cloud server starts at $0.12/GB and is applied to the server's first outbound gigabyte (and every subsequent gigabyte in that month). If your server sends 300GB of data outbound every month, you pay $36 in bandwidth charges (for a combined monthly total of $51). If your server uses 1TB of outbound bandwidth in a given month, you end up paying $135 for that "$15/mo" server.

Cloud servers at SoftLayer are designed to be "driven." Every monthly virtual server instance from SoftLayer includes 1TB of outbound bandwidth at no additional cost, so if your cloud server sends 1TB of outbound bandwidth, your total charge for the month is $50. The "$15/mo v. $50/mo" comparison becomes "$135/mo v. $50/mo" when we realize that these cloud servers don't just sit in the garage. This illustration shows how the costs compare between the two offerings with monthly bandwidth usage up to 1.3TB*:

Cloud Cost v Bandwidth

*The graphic extends to 1.3TB to show how SoftLayer's $0.10/GB charge for bandwidth over the initial 1TB allotment compares with the competitor's $0.12/GB charge.
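If you'd like to run the same comparison against your own expected usage, the arithmetic is simple: effective monthly cost = base price + (billable outbound GB x per-GB rate). Here's a quick sketch using the rates cited above; adjust the gb variable to match your own traffic:

# competitor: $15 base, every outbound GB billed at $0.12
# SoftLayer: $50 base, first 1000 GB included, $0.10/GB afterward
awk -v gb=1000 'BEGIN {
    competitor = 15 + gb * 0.12
    softlayer = 50 + (gb > 1000 ? (gb - 1000) * 0.10 : 0)
    printf "competitor: $%.2f   softlayer: $%.2f\n", competitor, softlayer
}'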

Most cloud hosting providers sell these "zero down, $59/mo car leases" and encourage you to window-shop for the lowest monthly price based on number of cores, RAM and disk space. You find the lowest price and mentally justify the cost-per-GB bandwidth charge you receive at the end of the month because you know that you're getting value from the traffic that used that bandwidth. But you'd be better off getting a more powerful server that includes a bandwidth allotment.

As a buyer, it's important that you make your buying decisions based on your specific use case. Are you going to spin up and spin down instances throughout the month, or are you looking for a cloud server that will stay online the entire month? From there, you should estimate your bandwidth usage to get an idea of the actual monthly cost you can expect for a given cloud server. If you don't expect to use 300GB of outbound bandwidth in a given month, your usage might be best suited for that competitor's offering. But then again, it's probably worth mentioning that SoftLayer's base virtual server instance has twice the RAM, more disk space and higher-throughput network connections than the competitor's offering we compared against. Oh yeah, and all those other cloud differentiators.

-@khazard

September 24, 2013

Four Rules for Better Code Documentation

Last month, Jeremy shared some valuable information regarding technical debt on SLDN. In his post, he discussed how omitting pertinent information when you're developing for a project can cause more work to build up in the future. One of the most common areas developers overlook when it comes to technical debt is documentation. This oversight comes in two forms: A complete omission of any documentation and inadequate information when documentation does exist. Simply documenting the functionality of your code is a great start, but the best way to close the information gap and avoid technical debt that stems from documentation (or lack thereof) is to follow four simple rules.

1. Know Your Audience

When we're talking about code, it's safe to say you'll have a fairly technical audience; however, it is important to note the level of understanding your audience has of the code itself. While they should be able to grasp common terms and development concepts, they may be unfamiliar with the functionality you are programming. Because of this, it's a good idea to provide a link to an internal, technical knowledgebase or wiki that will provide in-depth details on the functionality of the technology they'll be working with. We try to use a combination of internal and external references that we think will provide the most knowledge to developers who may be looking at our code. Here's an example of that from our Dns_Domain class:

 * @SLDNDocumentation Service Overview <<< EOT
 * SoftLayer customers have the option of hosting DNS domains on the SoftLayer
 * name servers. Individual domains hosted on the SoftLayer name servers are
 * handled through the SoftLayer_Dns_Domain service.
 *
 * Domain changes are applied automatically by our nameservers, but changes may
 * not be received by the other name servers on the Internet for 72 hours after
 * your change. The SoftLayer_Dns_Domain service does not apply to customers who
 * run their own nameservers on servers purchased from SoftLayer.
 *
 * SoftLayer provides secondary DNS hosting services if you wish to maintain DNS
 * records on your name server, but have records replicated on SoftLayer's name
 * servers. Use the [[SoftLayer_Dns_Secondary]] service to manage secondary DNS
 * zones and transfers.
 * EOT
 *
 * @SLDNDocumentation Service External Link http://en.wikipedia.org/wiki/Domain_name_system Domain Name System at Wikipedia
 * @SLDNDocumentation Service External Link http://tools.ietf.org/html/rfc1035 RFC1035: Domain Names - Implementation and Specification at ietf.org
 * @SLDNDocumentation Service See Also SoftLayer_Dns_Domain_ResourceRecord
 * @SLDNDocumentation Service See Also SoftLayer_Dns_Domain_Reverse
 * @SLDNDocumentation Service See Also SoftLayer_Dns_Secondary
 *

Enabling the user to learn more about a topic, product, or even a specific call alleviates the need for users to ask multiple questions regarding the "what" or "why" and will also minimize the need for you to explain more basic concepts regarding the technology supported by your code.

2. Be Consistent

There are two main areas developers should focus on when it comes to consistency: Formatting and terminology.

Luckily, formatting is pretty simple. Most languages have a set of standards attached to them that extend to the Docblock, which is where the documentation portion of the code normally takes place. Docblocks can be used to provide an overview of the class, identify authors or product owners and provide additional reference to those using the code. The example below uses PHP's standards for documentation tagging and allows users to quickly identify the parameters and return value for the createObject method in the Dns_Domain class:

    /**
     * @param string $objectType
     * @param object $templateObject
     *
     * @return SoftLayer_Dns_Domain
     */
    public static function createObject($objectType = __CLASS__, $templateObject)

Keeping consistent when it comes to terminology is a bit more difficult; especially if there have been no standards in place before. As an example, we can look to one of the most common elements of hosting: the server. Some people call this a "box," a "physical instance" or simply "hardware." The server may be a name server, a mail server, a database server or a web server.

If your company has adopted a term, use that term. If they haven't, decide on a term with your coworkers and stick to it. It's important to be as specific as possible in your documentation to avoid any confusion, and when you adopt specific terms in your documentation, you'll also find that this consistency will carry over into conversations and meetings. As a result, training new team members on your code will go more smoothly, and it will be easier for other people to assist in maintaining your code's documentation.

Bonus: It's much easier to search and replace when you only have to search for one term.

3. Forget What You Know About Your Code ... But Only Temporarily

Regardless of the industry, people who write their own documentation tend to omit pertinent information about the topic. When I train technical writers, I use the peanut butter and jelly example: How would you explain the process of making a peanut butter and jelly sandwich? Many would-be instructors omit things that would result in a very poorly made sandwich ... if one could be made at all. If you don't tell the reader to get the jelly from the cupboard, how can they put jelly on the sandwich? It's important to ask yourself when writing, "Is there anything that I take for granted about this piece of code that other users might need or want to know?"

Think about a coding example where a method calls one or more methods automatically in order to do its job or a method acts like another method. In our API, the createObjects method uses the logic of the createObject method that we just discussed. While some developers may pick up on the connection based on the method's name, it is still important to reference the similarities so they can better understand exactly how the code works. We do this in two ways: First, we state that createObjects follows the logic of createObject in the overview. Second, we note that createObject is a related method. The code below shows exactly how we've implemented this:

     * @SLDNDocumentation Service Description Create multiple domains at once.
     *
     * @SLDNDocumentation Method Overview <<< EOT
     * Create multiple domains on the SoftLayer name servers. Each domain record
     * passed to ''createObjects'' follows the logic in the SoftLayer_Dns_Domain
     * ''createObject'' method.
     * EOT
     *
     * @SLDNDocumentation Method Associated Method SoftLayer_Dns_Domain::createObject

4. Peer Review

The last rule, and one that should not be skipped, is to always have a peer look over your documentation. There really isn't a lot of depth behind this one. In Development, we try to peer review documentation during the code review process. If new content is written during code changes or additions, developers can add content reviewers, who have the ability to add notes with revisions, suggestions and questions. Once all parties are satisfied with the outcome, we close out the review in the system and the content is updated in the next code release. With peer review of documentation, you'll catch typos, inconsistencies and gaps. It always helps to have a second set of eyes before your content hits its users.

Writing better documentation really is that easy: Know your audience, be consistent, don't take your knowledge for granted, and use the peer review process. I put these four rules into practice every day as a technical writer at SoftLayer, and they make my life so much easier. By following these rules, you'll have better documentation for your users and will hopefully eliminate some of that pesky technical debt.

Go, and create better documentation!

-Sarah

September 20, 2013

Building a Mobile App with jQuery Mobile: The Foundation

Based on conversations I've had in the past, at least half of the web developers I've met have admitted to cracking open an Objective-C book at some point in their careers with high hopes of learning mobile development ... After all, who wouldn't want to create "the next big thing" for a market growing so phenomenally every year? I count myself among that majority: I've been steadily learning Objective-C over the past year, dedicating a bit of time every day, and I feel like I still lack the skill set required to create an original, complex application. Wouldn't it be great if we web developers could finally get our shot in the App Store without having to unlearn and relearn the particulars of coding a mobile application?

Luckily for us: There is!

The rock stars over at jQuery have created a framework called jQuery Mobile that allows developers to create cross-platform, responsive applications on an HTML5-based jQuery foundation. The framework allows for touch and mouse event support, so you're able to publish across multiple platforms, including iOS, Android, Blackberry, Kindle, Nook and on and on and on. If you're able to create web applications with jQuery, you can now create an awesome cross-platform app. All you have to do is create an app as if it were a dynamic HTML5 web page, and jQuery takes care of the rest.

Let's go through a real-world example to show this functionality in action. The first thing we need to do is fill in the <head> content with all of our necessary jQuery libraries:

<!DOCTYPE html>
<html>
<head>
    <title>SoftLayer Hello World!</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.css" />
    <script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
    <script src="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.js"></script>
</head>

Now let's create a framework for our simplistic app in the <body> section of our page:

<body>
    <div data-role="page">
        <div data-role="header">
            <h1>My App!</h1>
        </div>
 
        <div data-role="content">
            <p>This is my application! Pretty cool, huh?</p>
        </div>
 
        <div data-role="footer">
            <h1>Bottom Footer</h1>
        </div>
 
    </div>
</body>
</html>

Even novice web developers should recognize the structure above. You have a header, content and a footer just as you would in a regular web page, but we're letting jQuery apply some "native-like" styling to those sections with the data-role attributes. This is what our simple app looks like so far:

jQuery Mobile App Screenshot #1

While it's not very fancy (yet), you see that the style is well suited to the iPhone I'm using to show it off. Let's spice it up a bit and add a navigation bar. Since we want the navigation to be a part of the header section of our app, let's add an unordered list there:

<div data-role="header">
    <h1>My App!</h1>
        <div data-role="navbar">
            <ul>
                <li><a href="#home" class="ui-btn-active" data-icon="home" data-theme="b">Home</a></li>
                <li><a href="#softlayer_cool_news" data-icon="grid" data-theme="b">SL Cool News!</a></li>
                <li><a href="#softlayer_cool_stuff" data-icon="star" data-theme="b">SL Cool Stuff!</a></li>
            </ul>
        </div>
    </div>

You'll notice again that it's not much different from regular HTML. We've created a navbar div with an unordered list of menu items we'd like to add to the header: Home, SL Cool News and SL Cool Stuff. Notice in the anchor tag of each that there's an attribute called data-icon, which defines the graphical icon we want to represent the navigation item. Let's have a peek at what it looks like now:

jQuery Mobile App Screenshot #2

Our app isn't doing a whole lot yet, but you can see from our screenshot that the pieces are starting to come together nicely. Because we're developing our mobile app as an HTML5 app first, we're able to make quick changes and see those changes in real time from our phone's browser. Once we get the functionality we want into our app, we can use a tool such as PhoneGap or Cordova to package it into a ready-to-use standalone iPhone app (provided you're enrolled in the Apple Developer Program, of course), or we can leave the app as-is for a very nifty mobile browser application.

In my next few blogs, I plan to expand on this topic by showing you some of the amazingly easy (and impressive) functionality available in jQuery Mobile. In the meantime, go grab a copy of jQuery Mobile and start playing around with it!

-Cassandra
