executive-blog

January 15, 2014

Keep Spending Most Our Lives Livin' in a Gamer's Paradise

With apologies to Coolio, I couldn't resist adapting a line from the chorus of "Gangsta's Paradise" to be the title of this blog post. While I could come up with a full, cringe-worthy cloud computing version of the song (and perform it), I'll save myself the embarrassment and instead focus on why "Gamer's Paradise" came to mind in the first place. We announced some amazing stats about two gaming customers that use SoftLayer's cloud infrastructure to power popular online games, and I thought I'd share an interesting observation about that news.

More than 130 million gamers rely on SoftLayer infrastructure. SoftLayer is virtually invisible to those gamers. And that's why gaming companies love us.

When would a gamer care where a game is hosted? Simple: When gameplay is unavailable, lagging, or otherwise underperforming. Because we deliver peak cloud performance consistently for our gaming customers, we'll continue to live in the shadows of gamers' collective consciousness (while taking center stage in the minds of game producers and developers).

It's easy to get caught up in discussing the technical merits of our cloud hosting platform. Speeds and feeds provide great metrics for explaining our infrastructure, but every now and then, it's worthwhile to step back and look at the forest instead of the trees. Instead of talking about how bare metal resources consistently outperform their virtual server equivalents, let's take a look at why our gaming customers need as much server horsepower as we can provide:

As you can see, the games we're hosting for our customers are a little more resource-intensive than Tic-tac-toe and Pong. By leveraging SoftLayer bare metal infrastructure, gaming companies such as KUULUU and Multiplay can seamlessly support high definition gameplay in massive online environments for gamers around the world. When KUULUU launched their wildly popular LP Recharge Facebook game, they trusted our platform all the way from beta testing through launch, daily play, and updates. When Multiplay needed to support 25,000 new users in Battlefield 4, they spun up dedicated SoftLayer resources in less than four hours. If gamers expect a flawless user experience, you can imagine how attentive to infrastructure needs gaming companies are.

As more and more users sign on to play games online with Multiplay, KUULUU, and other gaming customers on our platform, we'll celebrate crossing even bigger (and more astounding) milestones like the 130 million mark we're sharing today. In the meantime, I'm going to go "check on our customers' servers" with a few hours of gameplay ... You know, for the good of our customers.

-@khazard

More Info: Multiplay and KUULUU Launch Games with SoftLayer, an IBM Company - Gaming companies flock to SoftLayer’s cloud, adding to 130 million players worldwide

January 10, 2014

Platform Improvements: VLAN Management

As director of product development, I'm tasked with providing SoftLayer customers greater usability and self-service tools on our platform. Often, that challenge involves finding, testing, and introducing new products, but a significant amount of my attention focuses on internal projects to tweak and improve our existing products and services. To give you an idea of what that kind of "behind the scenes" project looks like, I'll fill you in on a few of the updates we recently rolled out to improve the way customers interact with and manage their Virtual LANs (VLANs).

VLANs play a significant role in SoftLayer's platform. In the most basic sense, VLANs fool servers into thinking they're behind the same network switch. If you have multiple servers in the same data center and behind the same router, you could have them all on the same VLAN, and all traffic between the servers would be handled at the layer-2 network level. For customers with multi-tier applications, zones can be created to isolate specific servers into separate VLANs — database servers, app servers, and Web servers can all be isolated in their own security partitions to meet specific security and/or compliance requirements.

In the past, VLANs were all issued distinct numbers so that we could logically and consistently differentiate them from each other. That utilitarian approach has proven to be functional, but we noticed an opportunity to make the naming and management of VLANs more customer-friendly without losing that functionality. Because many of our customers operate large environments with multiple VLANs, they've had the challenge of remembering which servers live behind which VLAN number, and the process of organizing all of that information was pretty daunting. Imagine an old telephone switchboard with criss-crossing wires connecting several numbered jacks (and not connecting others). This is where our new improvements come in.

Customers now have the ability to name their VLANs, and we've made updates that increase visibility into the resources (servers, firewalls, gateways, and subnets) that reside inside specific VLANs. In practice, that means you can name the VLAN that houses your database servers "DB" or label it to pinpoint a specific department inside your organization. When you need to find one of those VLANs, you can easily search for it by name and make changes on the spot.
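
If you prefer scripting to clicking, you can drive the same naming from the SoftLayer API. Here's a minimal sketch using curl against the REST endpoint; it assumes your portal username and API key, a placeholder VLAN ID, and that the new name is exposed through the standard editObject method on the SoftLayer_Network_Vlan service:

# Placeholder credentials and VLAN ID (replace with your own).
API_USER="portal_username"
API_KEY="api_key"
VLAN_ID="123456"

# Confirm you're looking at the right VLAN before renaming it.
curl -s "https://$API_USER:$API_KEY@api.softlayer.com/rest/v3/SoftLayer_Network_Vlan/$VLAN_ID/getObject.json"

# Give the VLAN a friendly name, e.g. "DB" for your database tier.
curl -s -X POST -d '{"parameters":[{"name":"DB"}]}' \
  "https://$API_USER:$API_KEY@api.softlayer.com/rest/v3/SoftLayer_Network_Vlan/$VLAN_ID/editObject.json"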

VLAN List View

VLAN Detail Page

While these little improvements may seem simple, they make life much easier for IT departments and sysadmins with large, complex environments. If you don't need this kind of functionality, we don't throw it in your face, but if you do need it, we make it clear and easily accessible.

If you ever come across quirks in the portal that you'd like us to address, please let us know. We love making big waves by announcing new products and services, but we get as much (or more) joy from finding subtle ways to streamline and improve the way our customers interact with our platform.

-Bryce

December 16, 2013

Xplenty: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Yaniv Mor from Xplenty. Xplenty is a cloud-based, code-free Hadoop-as-a-Service platform that allows you to easily create data workflows and to provision, monitor, and scale clusters. Their goal is to eliminate the complexity of Hadoop to make it accessible and cost-effective for everyone.

Simplifying Hadoop

Apache Hadoop, open source software developed by Doug Cutting, is the most popular storage and processing platform for big data. Because Hadoop can accommodate structured data, semi-structured data, and unstructured data, it is the storage architecture of choice for some of the Internet's largest and most data-rich sites. Industry giants such as Google and Facebook have been using Hadoop for years to store and deliver information while gathering insights from customer behavior and internal business processes, and their obvious success with the platform has helped drive broad adoption and popularity all the way down to small-businesses and startups.

Specific use cases vary among industries, but similarities exist. Many companies leverage Hadoop to gather information about their clientele. With Hadoop, a company can process huge amounts of data to examine past and present behaviors, and with that information, customers can be presented personally-tailored recommendations, and the business can glean deep insights from the trends and outliers in its customer base. As a result, customers are more likely to make repeat purchases, and companies are able to predict trends and possible risks, allowing them to visualize and prepare for a number of business scenarios.

Another compelling use case for Hadoop is its ability to analyze and report on multi-faceted marketing and advertising campaigns. By drilling down into the guts of a campaign, users can see exactly what worked and what didn't. Marketers and advertisers can direct their resources to the campaigns that worked and let the ineffective ones fall by the wayside.

On the internal side, businesses are using Hadoop to better understand their own information. Data systems at financial companies use it to detect fraud anomalies by comparing transaction details. If you've ever made a credit card purchase in another state or country but the purchase didn't go through, your bank's system probably flagged the transaction for a representative to investigate. Other companies analyze data collected from their networks to monitor activity and diagnose bottlenecks and other issues with a negative impact.

The challenge with leveraging Hadoop's broad potential is that a company generally needs dedicated technical resources to allocate toward building and maintaining the solution — from manpower to finances to infrastructure. Hadoop is difficult to program and requires a very specific skill set that few possess. If a company doesn't have the personnel for the job, it will need to fork over some serious cash to get a system built and maintained. This can significantly hinder the progress of the data and business intelligence teams, and by extension, the progress of the company. That's why we decided to create Xplenty.

Xplenty is a coding-free Hadoop-as-a-Service platform that allows data and BI users to process their big data stored on the SoftLayer cloud without having to acquire any special skills. Xplenty removes the need to divert those precious resources away from the business at hand. Xplenty's Hadoop-as-a-Service platform has a graphical user interface that enables the data and BI teams to build data flows without ever having to write a line of code. The benefit of this is twofold. First, business intelligence analysts can quickly build data flows that would typically take weeks or more to program and debug, and data users can easily insert Xplenty into their data stack to handle processing needs. Second, since the IT department doesn't have to worry about doing any programming, they are able to tackle more pressing issues, bottlenecks are avoided, and life goes on without a hitch.

Xplenty was created specifically for the cloud, and SoftLayer is a major player in this space, so it was a natural fit for us to partner up to provide a SoftLayer-specific offering that will perform even better for customers already using SoftLayer infrastructure. We only work with providers with the best and most stable infrastructure, and SoftLayer is definitely at the top of the list.

If you want to try Hadoop on Xplenty, jump over to our SoftLayer sign up page, enter your details, and test drive the platform with a free 30-day trial!

- Yaniv Mor, Xplenty

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

December 11, 2013

2013 at SoftLayer: Year in Review

I'm going into my third year at SoftLayer, and to quote Yogi Berra, it feels like "déjà vu all over again." The breakneck pace of innovation, cloud adoption, and market consolidation only seems to be accelerating.

The BIG NEWS for SoftLayer was announced in July when we became part of IBM. Plenty has already been written about the significance of this acquisition, but as our CEO, Lance Crosby, eloquently put it in an earlier blog, "customers and clients from both companies will benefit from a higher level of choice and a higher level of service from a single partner. More important, the real significance will come as we merge technology that we developed within the SoftLayer platform with the power and vision that drives SmartCloud and pioneer next-generation cloud services."

We view our acquisition as an interesting inflection point for the entire cloud computing industry. The acquisition has ramifications that go beyond the IaaS market and include both PaaS and SaaS offerings. As the foundation for IBM's SmartCloud offerings, the one-stop shop for an entire portfolio of cloud services will resonate with startups and large enterprises alike. We're also seeing a market that is rapidly consolidating, and only those with global reach, deep pockets, and an established customer base will survive.

With IBM's support and resources, SoftLayer's plans for customer growth and geographic expansion have hit the fast track. News outlets are already abuzz with our plans to open a new data center facility in Hong Kong in the first quarter of next year, and that's just the tip of the iceberg for our extremely ambitious 2014 growth plans. Given the huge influx of opportunities our fellow IBMers are bringing to the table, we're going to be busy building data centers to stay one step ahead of customer demand.

The IBM acquisition generated enough news to devote an entire blog to, but because we've accomplished so much in 2013, I'd be remiss if I didn't create some space to highlight some of the other significant milestones we achieved this year. The primary reason SoftLayer was attractive to IBM in the first place was our history of innovation and technology development, and many of the product announcements and press releases we published this year tell that story.

Big Data and Analytics
Big data has been a key focus for SoftLayer in 2013. With the momentum we generated when we announced our partnership with MongoDB in December of 2012, we've been able to develop and roll out high-performance bare metal solution designers for Basho's Riak platform and Cloudera Hadoop. Server virtualization is a phenomenal boon to application servers, but disk-heavy, I/O-intensive operations can easily exhaust the resources of a virtualized environment. Because Riak and Hadoop are two of the most popular platforms for big data architectures, we teamed up with Basho and Cloudera to engineer server configurations that would streamline provisioning and supercharge the operations of their data-rich environments. From the newsroom in 2013:

  • SoftLayer announced the availability of Riak and Riak Enterprise on SoftLayer's IaaS platform. This partnership with Basho gives users the availability, fault tolerance, operational simplicity, and scalability of Riak combined with the flexibility, performance, and agility of SoftLayer's on-demand infrastructure.
  • SoftLayer announced a partnership with Cloudera to provide Hadoop big data solutions in a bare metal cloud environment. These on-demand solutions were designed with Cloudera best practices and are rapidly deployed with SoftLayer's easy-to-use solution designer tool.

Cutting-Edge Customers
Beyond the pure cloud innovation milestones we've hit this year, we've also seen a few key customers in vertical markets do their own innovating on our platform. These companies run the gamut from next-generation e-commerce to interactive marketers and game developers who require high-performance cloud infrastructure to build and scale the next leading application or game. Some of these game developers and cutting-edge tech companies are pretty amazing, and we're glad we tapped into them to tell our story:

  • Asia's hottest tech companies looking to expand their reach globally are relying on SoftLayer's cloud infrastructure to break into new markets. Companies such as Distil Networks, Tiket.com, Simpli.fi, and 6waves are leveraging SoftLayer's Singapore data center to build out their customer base while enabling them to deliver their application or game to users across the region with extremely low latency.
  • In March, we announced that hundreds of the top mobile, PC and social games with more than 100 million active players are now supported on SoftLayer's infrastructure platform. Gaming companies -- including Hothead Games, Geewa, Grinding Gear Games, Peak Games and Rumble Entertainment -- are flocking to SoftLayer because they can roll out virtual and bare-metal servers along with a suite of networking, security and storage solutions on demand and in real time.

Industry Recognition
SoftLayer's success and growth are a collective effort; however, it is nice to see our founder and CEO, Lance Crosby, get some well-deserved recognition. In August, the Metroplex Technology Business Council (MTBC), the largest technology trade association in Texas, named him the winner of its Corporate CEO of the Year award during the 13th Annual Tech Titans Awards ceremony.

The prestigious annual contest recognizes outstanding information technology companies and individuals in the North Texas area who have made significant contributions during the past year locally, as well as to the technology industry overall.

We're using the momentum we've continued building in 2013 to propel us into 2014. An upcoming milestone, just around the corner, will be our participation at Pulse 2014 in late February. At this conference, we plan to unveil the ongoing integration efforts taking place between SoftLayer and IBM, including how:

  • SoftLayer provides flexible, secure, cloud-based infrastructure for running the toughest and most mission-critical workloads on the cloud;
  • SoftLayer is the foundation of IBM PaaS offerings for cloud-native application development and deployment;
  • SoftLayer is the platform for many of IBM's SaaS offerings supporting mobile, social and analytic applications. IBM has a growing portfolio of roughly 110 SaaS applications.

Joining forces with IBM will have its challenges, but the opportunities ahead look amazing. We encourage you to watch this space for even more activity next year and join us at Pulse 2014 in Las Vegas.

-Andre

December 5, 2013

How to Report Abuse to SoftLayer

When you find hosted content that doesn't meet our acceptable use policy or another kind of inappropriate Internet activity originating from a SoftLayer service, your natural reaction might be to assume, "SoftLayer must know about it, and the fact that it's going on suggests that they're allowing that behavior." I know this because every now and then, I come across a "@SoftLayer is phishing my email. #spamming #fail" Tweet or a "How about u stop hacking my computer???" Facebook post. It's easy to see where these users are coming from, so my goal for this post is to provide the background you need to understand how behavior we don't condone — what we consider "abuse" of our services — might occur on our platform and what we do when we learn about it.

The most common types of abuse reported from the SoftLayer network are spam, copyright/trademark infringement, phishing and abusive traffic (DDoS attacks). All four are handled by the same abuse team, but they're all handled a bit differently, so it's important to break them down to understand the most efficient way to report them to our team. When you're on the receiving end of abuse, all you want is to make it stop. In the hurry to report the abusive behavior, it's easy to leave out some of the key information we need to address your concern, so let's take a look at each type of abuse and the best ways to report it to the SoftLayer team:

If You Get Spam

Spam is the most common type of abuse that gets reported to SoftLayer. Spam email is unsolicited, indiscriminate bulk messaging that is sent to you without your explicit consent. If you open your email client right now, your junk mail folder probably has a few examples of spam ... Someone is trying to sell you discount drugs or arrange a multi-million dollar inheritance transfer. In many ways, it's great that email is so easy to use and pervasive in our daily lives, but that ease of use also makes it an easy medium for spammers to abuse. Whether the spammer is a direct SoftLayer customer or a customer of one of our customers or somewhere further down the line of customers of customers, spam messages sent from a SoftLayer server will point back to us, and our abuse team is the group that will help stop it.

When you receive spam sent through SoftLayer, you should forward it directly to our abuse team (abuse@softlayer.com). Our team needs a full copy of the email with its headers intact. If you're not sure what that means, check out these instructions on how to retrieve your email headers. The email headers help tell the story about where exactly the messages are coming from and which customer we need to contact to stop the abuse.

If You See Phishing

You might encounter phishing via spam email, or you might come across it on a website. Phishing is best described as someone masquerading as someone else to get your sensitive information, and it's one of the most serious issues our abuse team faces. Every second that a phishing/scam site is online, another user might be fooled into giving up his or her credit card or login information, and we don't want that to happen. Often, the fact that a site is not legitimate is clear relatively quickly, but as defenses against phishing have gotten better, so have the phishing sites. Take a minute to go through this phishing IQ test to get an idea of how difficult phishing can be to spot.

When it comes to reporting phishing, you should send the site's URL to the abuse team (also using abuse@softlayer.com). If you came across the phishing site via a spam email, be sure to include the email headers with your message. To help us filter the phishing complaint, please make sure to include the word "phishing" in your email's subject line. Our team will immediately investigate and follow up with the infringing customer internally.

If You Find Copyright or Trademark Infringement

If infringement of your copyright or trademark is happening on our platform, we want to know about it so we can have it taken down immediately. Copyright complaints and trademark complaints are handled slightly differently, so let's look at each type to better understand how they work.

Complaints of copyright infringement are processed by our abuse team based on the strict DMCA complaint laws. When I say "strict" in that sentence, I'm not saying it lightly ... Because DMCA complaints are legal issues, every requirement in the DMCA must be met in order for our team to act on the complaint. That might seem arbitrary, but we're not given much leeway when it comes to the DMCA process, and we have to be sticklers.

On our DMCA legal page, we outline the process of reporting a DMCA complaint of copyright infringement (primarily citing the statute 17 U.S.C. Section 512(c)(3)). If you don't completely understand what needs to be included in the claim, we recommend that you seek independent legal advice. It sounds harsh, but failure to submit a copyright infringement notification as described there will result in no legal notice or action on behalf of SoftLayer. When you've made sure all required evidence has been included in your DMCA complaint, make sure "copyright" or "DMCA" appears in your subject line and submit the complaint to copyright@softlayer.com.

Trademark complaints do not have the same requirements as copyright complaints, but the more information you can provide in your complaint, the easier it will be for our customer to locate and remove the offending material. If you encounter unauthorized use of your registered trademark on our network, please email copyright@softlayer.com with details — the exact location of the infringing content, your trademark registration information, etc. — along with an explanation that this trademark usage is unauthorized and should be removed. In your email, please add the word "trademark" to the subject line to help us filter and prioritize your complaint.

If You See Abusive Traffic

Spam, phishing and copyright infringement are relatively straightforward when it comes to finding and reporting abuse, but sometimes the abuse isn't as visible and tangible (though the effect usually is). If a SoftLayer server is sending abusive traffic to your site, we want to know about it as quickly as possible. Whether that behavior is part of a Denial of Service (DoS) attack or is just scanning ports to possibly attack later, it's important that you give us details so we can prevent any further activity.

To report this type of abuse, send a snippet from your log file that includes at least 10 lines showing attempts to break into or overload your server. Here's a quick reference to where you can find the relevant logs to send (a short example of pulling those lines follows the list):

  • Email Spam - Send Mail Logs:
    • /var/log/maillog
    • /usr/local/psa/var/log/maillog
  • Brute Force Attacks - Send SSH Logs:
    • /var/log/messages
    • /var/log/secure
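
If you're comfortable at the command line, grabbing those lines takes only a moment. The snippet below is just an illustrative sketch: the IP address and output file name are placeholders, and the paths follow the list above (adjust them for your distribution).

# Placeholder IP of the SoftLayer server sending the abusive traffic.
SOURCE_IP="198.51.100.7"

# Pull the most recent matching SSH entries (send us at least 10 lines).
grep "$SOURCE_IP" /var/log/secure | tail -n 20 > abuse-evidence.txt

# Append matching mail log entries if the issue is spam.
grep "$SOURCE_IP" /var/log/maillog | tail -n 20 >> abuse-evidence.txt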

Like spam and phishing reports, abusive traffic complaints should be sent to abuse@softlayer.com with a quick explanation of what is happening and any other details you can provide. When you submit a complaint about abusive traffic, make sure your message's subject line reflects the type of issue ("DDoS attack," "brute force attempts," etc.) so our team can investigate your report even more quickly.

As I mentioned at the start of this post, these are just four types of abusive behavior that our abuse department addresses on a daily basis. Our Acceptable Use Policy (AUP) outlines what can and cannot be hosted using SoftLayer services, and the process of reporting other types of abuse is generally the same as what you see in the four examples I mentioned above ... Send a clear, concise report to abuse@softlayer.com with key words about the type of violation in the message's subject line. When our team is able to look into your complaint and find the evidence they need to take action, they do so quickly.

I can't wrap up this blog of tips without mentioning the "Tips from the Abuse Department" blog Jennifer Groves wrote about reporting abuse ... It touches on some of the same ideas as this post, and it also provides a little more perspective from behind the lines of the abuse department. As the social media gal, I don't handle abuse on a day-to-day basis, but I do help people dealing with abuse issues, and I know a simple guide like this will be of value.

If an abuse-related issue persists and you don't feel like anything has been fixed, double-check that you've included all the necessary information and evidence in your correspondence to the abuse team. In most cases, you will not receive a response from the abuse team, but that doesn't mean they aren't taking action. The abuse@ and copyright@ email aliases function as notification systems for our abuse teams, and they correspond with the infringing customers internally when a complaint is submitted. Given the fact that hundreds of users may report the same abusive behavior at the same time, responding directly to each message would slow down the process of actually resolving the issue (which is the priority).

If everything was included in your initial correspondence with the abuse team but you still don't notice a change in the abusive behavior, you can always follow up with our social media team at twitter@softlayer.com, and we'll do everything we can to help.

-Rachel

November 19, 2013

Protect Your Data: Configure EVault for Server Backups

In "The Tenth Anniversary" episode of "Everybody Loves Raymond," Raymond accidentally records the Super Bowl over his wedding video. He hilariously tries to compensate for his gaffe by renewing his wedding vows so he can make a new tape for his wife Debra. If life imitates art, it's worth considering what would happen if that tape held your business data. It would be disaster!

While it's unlikely that one of your sysadmins will accidentally record the Super Bowl over the data in your database server cluster, data loss can occur in a number of ways. If your business data is not protected and backed up, it's unlikely that you'll have a neat and tidy sitcom episode resolution. Luckily, SoftLayer provides simple, inexpensive backup capabilities with software such as EVault, so you shouldn't ever be worried about anyone pulling a Raymond on your data.

The following quick, four-step process walks you through how to protect and back up your data by subscribing to SoftLayer's EVault Backup client. This software enables you to design and set your backup schedule, protecting your business from unexpected costs because of accidental deletions, viruses, and other disasters. To follow along on your own servers, your computing instances or bare metal servers need to be provisioned, and you need to have root or administrator level access to those servers. For the sake of brevity, I'll be using a Linux operating system in this guide, but if you're running Windows, the process, in general, is no different.

Step 1 - Order EVault Backup for the server or computing instance

  1. Log into the SoftLayer Customer Portal and select the server(s) that need storage services from the device list.
  2. Scroll down to the Storage section. Select the Add (or Modify) link in the right-hand corner of the EVault record to place an order for an EVault Backup client subscription.
  3. On the EVault ordering screen, select either Local or Remote Data Center and the desired amount of storage. Agree to the terms and conditions and click the Order EVault button to place your EVault storage order.
  4. The order is typically provisioned in five minutes or less, and the system creates a username and password for the new instance of EVault.
  5. Click Services→Storage→EVault and expand the EVAULT link to make note of the user credentials, which will be used in Step 3.

Step 2 - Download the EVault agent on the server or computing instance

  1. SSH into the server or computing instance and run the following command:
    # wget -N http://downloads.service.softlayer.com/evault/evault_manual.sh

Step 3 - Register the server or computing instance with EVault in order to run back up and restore jobs

  1. From the command prompt on the server or compute instance, run the following command to register it with EVault:
    ~]# sh ./evault_manual.sh
  2. In the ensuing prompts, enter the credentials that were noted in Step 1.5 and use ev-webcc01.service.softlayer.com for the web-based agent console address.

    Note: In the event the agent fails to register with EVault, you can quickly register the agent manually by running ~]#<Installation directory>/register

Once you've made it to this point, you're ready to run backup and restore jobs.

Step 4 - Log in to the EVault console with WebCCLogin

  1. From the SoftLayer Customer Portal, click Services→Storage→EVault.
  2. Expand the server or compute instance to which EVault Backup is attached. In the right-hand corner of the server entry you will find a link to WebCCLogin.
  3. Click the WebCCLogin link for the EVault Web CentralControl screen. Type in the credentials from Step 1.5 and you’ll be taken to the EVault Backup and Restore interface.
  4. You are now ready to run your backup and restore jobs!

Check your backups often to confirm that they're being created when, where, and how you want them to be created. To prepare for any possible disaster recovery scenarios, schedule periodic tests of your backups: Restore the most recent backup of your production server to an internal server. That way, if someone pulls a Raymond on your server(s), you'll be able to get all of your data back online quickly. If you're interested in learning more, visit the EVault Backup page on KnowledgeLayer.

-Vinayak Harnoor

Vinayak Harnoor is a Technical Architect with the IBM Global Technology Services (GTS) Global Cloud Ecosystem team.

November 14, 2013

Enhancing Usability by Building User Confidence

Consider your experiences with web applications, and see if this scenario seems familiar: Your electricity bill has some incorrect charges on it. Fearing that you will have to spend 40 minutes on hold if you call in, you find that the electric company website has a support center where you can submit billing issues and questions; you are saved! You carefully fill out the form with your sixteen-digit account number and detailed description of the incorrect charges. You read it over and click the submit button. Your page goes blank for a couple of seconds, the form comes back with a note saying you typed in your phone number incorrectly, and the detailed description you spent eleven minutes meticulously writing is gone.

Web applications have gotten much better at preventing these kinds of user experiences over the past few years, and I'm sure that none of your applications have this problem (if they do, fix it right now!), but "usability" is more than just handling errors gracefully. Having a seamless process is only half the battle when it comes to giving your users a great experience with your application. The other half of the battle is a much more subjective: Your users need to feel confident in their success every step of the way. By keeping a few general guidelines in mind, you can instill confidence in your users so that they feel positive about your application from start to finish with whatever they are trying to accomplish.

1. Keep the user in a familiar context.

As the user in our electric company support application example, let's assume the process works and does not lose any of my information. I have to have faith that the application is going to do what I expect it to do when the page refreshes. Faith and unfamiliar technology do not exactly go hand in hand. Instead of having the form submit with a page refresh, the site's developers could introduce a progress wheel or another kind of indicator that shows the data is being submitted while the content is still visible. If detailed content never goes away during the submission process, I'm confident that I still have access to my information.

Another example of the same principle is the use of modal windows. Modal windows are presented on top of a previous page, so users have a clear way of going back if they get confused or decide they navigated to the wrong place. By providing this new content on top of a familiar page, users are much less likely to feel disoriented if they get stuck or lost, and they will feel more confident when they're using the application.

2. Reassure the user with immediate feedback.

When you communicate frequently and clearly, users are reassured and much less likely to become anxious. Users want to see their actions get a response from your application. In our electric company support application example, imagine how much better the experience would be if a small blurb were displayed in red next to the phone number text box when I typed in my phone number in the wrong format. The immediate feedback would pinpoint the problem when it is easy to correct, and it would make me confident that when the phone number is updated, the application will continue to work as expected.

3. Provide warnings or extra information for dangerous or complicated operations.

When users are new to an application, they are not always sure which actions will have negative consequences. This is another great opportunity for communication. Providing notices or alerts for important or risky operations can offer a healthy dose of hesitation for new users who aren't prepared. Effective warnings or notices tell the user when they will want to perform this action or what the negative consequences might be, so the user can make an informed decision. Users are confident with informed decisions because a lack of information causes anxiety.

I learned how to implement this tip when I designed a wizard system for a previous employer that standardized how the company's application would walk users through any step-by-step process. My team decided early on to standardize a review step at the end of any implemented wizard. This was an extra step that every user had to go through for every wizard in the application, but it made all of the related processes much more usable and communicative. This extra information gave users a chance to see the totality of the operation they were performing, and it gave them a chance to correct any mistakes. Implementing this tip resulted in users who were fully informed and confident throughout the process of very complicated operations.

4. Do not assume your users know your terminology, and don't expect them to learn it.

Every organization has its own language. I have never encountered an exception to this rule. It cannot be helped! Inside your organization, you come up with a defined vocabulary for referencing the topics you have to work with every day, but your users won't necessarily understand the terminology you use internally. Some of your ardent users pick up on your language through osmosis, but the vast majority of users just get confused when they encounter terms they are not familiar with.

When interacting with users, refrain from using any of your internal language, and strictly adhere to a universally-accepted vocabulary. In many cases, you need shorthand to describe complex concepts that users will already understand. In this situation, always use universal or industry-wide vocabulary if it is available.

This practice can be challenging and will often require extra work. Let's say you have a page in your application dealing with "display devices," which could either be TVs or monitors. All of your employees talk about display devices because to your organization, they are essentially the same thing. The technology of your application handles all display devices in exactly the same way, so as good software designers you have this abstracted (or condensed for non-technical people) so that you have the least amount of code possible. The easiest route is to just have a page that talks about display devices. The challenge with that approach is that your users understand what monitors and TVs are, but they don't necessarily think of those as display devices.

If that's the case, you should use the words "monitors" and "TVs" when you're talking about display devices externally. This can be difficult, and it requires a lot of discipline, but when you provide familiar terminology, users won't be disoriented by basic terms. To make users more comfortable, speak to them in their language. Don't expect them to learn yours, because most of them won't.

When you look at usability through the subjective lens of user confidence, you'll find opportunities to enhance your user experience ... even when you aren't necessarily fixing anything that's broken. While it's difficult to quantify, confidence is at the heart of what makes people like or dislike any product or tool. Pay careful attention to the level of confidence your users have throughout your application, and your application can reach new heights.

-Tony

November 11, 2013

Sysadmin Tips and Tricks - Using the ‘for’ Loop in Bash

Ever have a bunch of files to rename or a large set of files to move to different directories? Ever find yourself copy/pasting nearly identical commands a few hundred times to get a job done? A system administrator's life is full of tedious tasks that can be eliminated or simplified with the proper tools. That's right ... Those tedious tasks don't have to be executed manually! I'd like to introduce you to one of the simplest tools to automate time-consuming repetitive processes in Bash — the for loop.

Whether you have been programming for a few weeks or a few decades, you should be able to quickly pick up on how the for loop works and what it can do for you. To get started, let's take a look at a few simple examples of what the for loop looks like. For these exercises, it's always best to use a temporary directory while you're learning and practicing for loops. The command is very powerful, and we wouldn't want you to damage your system while you're still learning.

Here is our temporary directory:

rasto@lmlatham:~/temp$ ls -la
total 8
drwxr-xr-x 2 rasto rasto 4096 Oct 23 15:54 .
drwxr-xr-x 34 rasto rasto 4096 Oct 23 16:00 ..
rasto@lmlatham:~/temp$

We want to fill the directory with files, so let's use the for loop:

rasto@lmlatham:~/temp$ for cats_are_cool in {a..z}; do touch $cats_are_cool; done;
rasto@lmlatham:~/temp$

Note: This should be typed all in one line.

Here's the result:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z
rasto@lmlatham:~/temp$

How did that simple command populate the directory with all of the letters in the alphabet? Let's break it down.

for cats_are_cool in {a..z}

The for is the command we are running, which is built into the Bash shell. cats_are_cool is a variable we are declaring. The specific name of the variable can be whatever you want it to be. Traditionally people often use f, but the variable we're using is a little more fun. Hereafter, our variable will be referred to as $cats_are_cool (or $f if you used the more boring "f" variable). Aside: You may be familiar with this pattern from environment variables, which are declared without the $ sign and then invoked with it.

When our command is executed, the variable we declared will take on each of the values in {a..z}, from a to z. Next, we use the semicolon to indicate we are done with the first phase of our for loop. The next part starts with do, which says: for each of a-z, do <some thing>. In this case, we are creating files by touching them via touch $cats_are_cool. The first time through the loop, the command creates a, the second time through b, and so forth. We complete that command with a semicolon, then we declare we are finished with the loop with "done".
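
If it helps to see those pieces spread out, here's the identical loop written across several lines with comments. It behaves exactly like the one-liner above:

for cats_are_cool in {a..z}   # declare the variable and the list of values it will take
do
  touch $cats_are_cool        # runs once per value: creates a, then b, ... then z
done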

This might be a great time to experiment with the command above, making small changes, if you wish. Let's do a little more. I just realized that I made a mistake. I meant to give the files a .txt extension. This is how we'd make that happen:

for dogs_are_ok_too in {a..z}; do mv $dogs_are_ok_too $dogs_are_ok_too.txt; done;
Note: It would be perfectly okay to re-use $cats_are_cool here. The variables are not persistent between executions.

As you can see, I updated the command so that a would be renamed a.txt, b would be renamed b.txt and so forth. Why would I want to do that manually, 26 times? If we check our directory, we see that everything was completed in that single command:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z.txt
rasto@lmlatham:~/temp$

Now we have files, but we don't want them to be empty. Let's put some text in them:

for f in `ls`; do cat /etc/passwd > $f; done

Note the backticks around ls. In Bash, backticks mean, "execute this and return the results," so it's like you executed ls and fed the results to the for loop! Next, cat /etc/passwd is redirected into $f, which takes on the filenames a.txt, b.txt, etc. Still with me?

So now I've got a bunch of files with copies of /etc/passwd in them. What if I never wanted files for a, g, or h? First, I'd get a list of just the files I want to get rid of:

rasto@lmlatham:~/temp$ ls | egrep 'a|g|h'
a.txt
g.txt
h.txt

Then I could plug that command into the for loop (using backticks again) and do the removal of those files:

for f in `ls | egrep 'a|g|h'`; do rm $f; done
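
The same pattern handles the "move a large set of files to different directories" chore from the introduction. As an illustrative sketch (the archive directory name is made up), this sorts the remaining .txt files into one directory per letter; ${f:0:1} is Bash shorthand for the first character of the filename:

for f in *.txt; do mkdir -p archive/${f:0:1}; mv $f archive/${f:0:1}/; done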

I know these examples don't seem very complex, but they give you a great first-look at the kind of functionality made possible by the for loop in Bash. Give it a whirl. Once you start smartly incorporating it in your day-to-day operations, you'll save yourself massive amounts of time ... Especially when you come across thousands or tens of thousands of very similar tasks.

Don't do work a computer should do!

-Lee

November 1, 2013

Paving the Way for the DevOps Revolution

The traditional approach to software development has been very linear: Your development team codes a release and sends it over to a team of quality engineers to be tested. When everything looks good, the code gets passed over to IT operations to be released into production. Each of these teams operates within its own silo and makes changes independent of the other groups, and at any point in the process, it's possible a release can get kicked back to the starting line. With the meteoric rise of agile development — a development philosophy geared toward iterative and incremental code releases — that old waterfall-type development approach is being abandoned in favor of a DevOps approach.

DevOps — a fully integrated development and operations approach — streamlines the software development process in an agile development environment by consolidating development, testing and release responsibilities into one cohesive team. This way, ideas, features and other developments can be released very quickly and iteratively to respond to changing and growing market needs, avoiding the delays of long, drawn-out and timed dev releases.

To help you visualize the difference between the traditional approach and the DevOps approach, take a look at these two pictures:

Traditional Waterfall Development

DevOps

Unfortunately, many businesses struggle to adopt the DevOps approach because they simply update their org chart by merging their traditional teams, but their development philosophy doesn't change at the same time. As a result, I've encountered a lot of companies that have been jaded by previous attempts to move to a DevOps model, and I'm not alone. There is a significant need in the marketplace for some good old-fashioned DevOps expertise.

A couple months ago, my friend Raj Bhargava pinged me with a phenomenal idea to put on a DevOps "un-conference" in Boulder, Colorado, to address the obvious need he's observed for DevOps education and best practices. Raj is a serial, multiple-exit entrepreneur from Boulder, and he is the co-founder and CEO of a DevOps-focused startup there called JumpCloud. When he asked if I would like to co-chair the event and have SoftLayer as a headline sponsor alongside JumpCloud, the answer was a quick and easy "Yes!"

Sure, there have been other DevOps-related conferences around the world, but ours was designed to be different from the outset. As strange as it may sound, half of the conference intentionally occurred outside of the conference: One of our highest priorities was to strike up conversations between the participants before, during and after the event. If we're putting on a conference to encourage a collaborative development approach, it would be counterproductive for us to use a top-down, linear approach to engaging the attendees, right?

I'm happy to report that this inaugural attempt of our untested concept was an amazing success. We kept the event private for our first run at the concept, but the event was bursting at the seams with brilliant developers and tech influencers. Brad Feld and our friends from the Foundry Group invited all of their portfolio CEOs and CTOs. David Cohen, co-founder of Techstars and head honcho at Bullet Time Ventures, did the same. JumpCloud and SoftLayer helped round out the attendee list with a few of our most innovative partners as well as a few technologists from within our own organizations. It was an incredible mix of super-smart tech pros, business leaders and VCs from all over the world.

With such a diverse group of attendees, the conversations at the event were engaging, energizing and profound. We discussed everything from how startups should incorporate automation into their business plans at the outset to how the practice of DevOps evolves as companies scale quickly. At the end of the day, we brought all of those theoretical discussions back down to the ground by sharing case studies of real companies that have had unbelievable success in incorporating DevOps into their businesses. I had the honor of wrapping up the event as moderator of a panel with Jon Prall from Sendgrid, Scott Engstrom from Gnip and Richard Miller of Mocavo, and I couldn't have been happier with the response.

I'd like to send a big thanks to everyone who participated, especially our cosponsors — JumpCloud, VictorOps, Authentic8, DH Capital, SendGrid, Cooley, Pivot Desk, SVP and Pantheon.

I'm looking forward to opening this up to the world next year!

-@PaulFord

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, a growing number of businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by two or more heterogeneous elements. In the traditional IT sense, those heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, a hybrid environment leverages different heterogeneous elements: bare metal and virtual server instances, delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)

Figure 2: SoftLayer's Hybrid - Dedicated + Virtual

Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. Actually, the use of the term is a lot more similar than I expected. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads from on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves from a CapEx to an OpEx model. In the past, adopting infrastructure as a service (IaaS) involved shoehorning workloads into whichever virtual resource most closely matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.

The real-world applications are pretty obvious: Your company is considering moving part of a workload to cloud in order to handle peak season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Furthering the dilemma, you have to capitalize on the assets you already have that are still of use to the company.

You finally have the flexibility to transition your environment gradually to a scalable, flexible cloud environment without sacrifice. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, so you shouldn't feel pressure to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will reduce your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so choose the best assets to move over. "Best" does not necessarily mean "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.
