Development Posts

November 19, 2015

SoftLayer and Koding join forces to power a Global Virtual Hackathon

This guest blog post is written by Cole Fox, director of partnerships at Koding.

Koding is excited to partner with SoftLayer on its upcoming Global Virtual Hackathon, happening December 12–13, 2015. The event builds on last year’s Hackathon, where more than 60,000 developers from all over the world participated and the winners took home over $35,000 in prizes! This year, we’ve upped the ante to make the event even bigger: the winner will take home a $100,000 grand prize.

“We are working with Koding for this virtual hackathon as part of our commitment to promote open source technology and support the talented community of developers who are dispersed all over the globe,” said Sandy Carter, general manager of Cloud Ecosystem and Developers at IBM. “Cloud-based open source development platforms like Koding make it easier to get software projects started, and hackathons are a great place to show how these kinds of platforms make software development easier and more fun.”

Why a virtual hackathon?
Hackathons are awesome. They allow developers to solve problems in a very short amount of time. The challenge with traditional hackathons is that they require you to be physically present in a room. With more and more of our lives moving online, why be tied to a physical location to solve problems? Virtual hackathons allow talented individuals from all over the world to participate, collaborate, and showcase their skills, regardless of their physical location. Our Global Virtual Hackathon levels the playing field.

Who won last year?
Educational games, especially those that teach programming, were popular to build—and a few actually won! Want to see what the winners built? Click here to check out a fun yet effective game teaching students to program. Learn more about the team of developers and see their code here. Last year, nine winners across three categories took home a prize. To see a list of last year’s winners, see the blog post here.

Tips to be successful and win this year
Here’s some motivation for you: the grand prize is $100,000. (That’s seed capital for your startup idea!)

So how do you win? First and foremost, apply now! Then talk to some friends and maybe even team up. You can also use Koding to find teammates once you’re accepted. Teammates aren’t a requirement but can definitely make for a fun experience and improve your chances of making something amazing.

Once you’re in, get excited! And be sure to start thinking about what you want to build around this year’s themes.

And the 2015 themes are…
Ready to build something and take home $100,000? Here are this year’s themes:

  • Data Visualization
    Data is everywhere, but how can we make sense of it? Infographics and analytics can bring to light important information that was previously locked away in a spreadsheet or database. We challenge you to use the visualization tools out there to surface those insights.
  • Enterprise Productivity
    The workplace can always be improved and companies are willing to pay a lot of money for great solutions. Build an application that helps employees do their jobs better and you could win big.
  • Educational Games
    Last year’s winning team, WunderBruders, created an educational game. But games aren’t just for children. Studies have shown that games not only improve motor skills, but they are also a great way to learn something new.

Wait a second. What is Koding anyway?
In short, Koding is a developer environment as a service. The platform provides everything you need to move your software development to the cloud, helping businesses build more productive, collaborative, and efficient development workflows. Businesses both small and large face three common challenges: on-boarding new team members, workflow efficiency, and knowledge retention. These pain points affect companies across all industries, but for companies involved in software development, they are often the most expensive and critical problems left unresolved. Koding was built to tackle these inefficiencies head on. Learn more about Koding for Teams.

Can I use my SoftLayer virtual servers with Koding?
Koding’s technical architecture is very flexible. If you have a SoftLayer virtual server, you can easily connect it to your Koding account. The feature is described in detail here.

Think you can hack it? APPLY NOW!

-Cole Fox

November 2, 2015

The multitenant problem solver is here: VMware NSX 6 on SoftLayer

We’re very excited to tell you about what’s coming down the pike here at SoftLayer: VMware NSX 6! This is something that I’ve personally been anticipating for a while now, because it solves so many of the issues that come up on a multitenant platform. Here’s how it works:

In short, the SoftLayer network serves as the underlay network and fabric, and NSX rides on top of it as the overlay network to create the SDN (software-defined network).

What is it?
VMware NSX is a virtual networking and security software product built from VMware's vCloud Networking and Security (vCNS) and Nicira's Network Virtualization Platform (NVP). NSX software-defined networking is part of VMware's software-defined data center concept, which offers cloud computing on VMware virtualization technologies. VMware's stated goal with NSX is to provision virtual networking environments without command line interfaces or other direct administrator intervention.

Network virtualization abstracts network operations from the underlying hardware onto a distributed virtualization layer, much like server virtualization does for processing power and operating systems. VMware vCNS (formerly called vShield) virtualizes L4-L7 of the network; Nicira's NVP virtualizes the network fabric, L2 and L3.

VMware says that NSX will expose logical firewalls, switches, routers, ports, and other networking elements to allow virtual networking among vendor-agnostic hypervisors, cloud management systems, and associated network hardware. It will also support external networking and security ecosystem services.

How does it work?
NSX network virtualization is an architecture that enables the full potential of a software-defined data center (SDDC), making it possible to create and run entire networks in parallel on top of existing network hardware. This results in faster deployment of workloads and greater agility in creating dynamic data centers.

This means you can create a flexible pool of network capacity that can be allocated, utilized, and repurposed on demand. You can decouple the network from the underlying hardware and apply virtualization principles to network infrastructure. You’re able to deploy networks in software that are fully isolated from each other, as well as from other changes in the data center. NSX reproduces the entire networking environment in software, including L2, L3, and L4–L7 network services within each virtual network. NSX offers a distributed logical architecture for L2–L7 services, provisioning them programmatically when virtual machines are deployed and moving them with the virtual machines. With NSX, the physical network resources you already have are all you need to support a next-generation data center.

What are some major features?
NSX brings an SDDC approach to network security. Its network virtualization capabilities enable the three key functions of micro-segmentation: isolation (no communication across unrelated networks), segmentation (controlled communication within a network), and security with advanced services (tight integration with leading third-party security solutions).

The key benefits of micro-segmentation include:

  1. Network security inside the data center: Fine-grained policies enable firewall controls and advanced security down to the level of the virtual NIC.
  2. Automated security for speed and agility in the data center: Security policies are automatically applied when a virtual machine spins up, moved when a virtual machine is migrated, and removed when a virtual machine is deprovisioned—eliminating the problem of stale firewall rules.
  3. Integration with the industry’s leading security products: NSX provides a platform for technology partners to bring their solutions to the SDDC. With NSX security tags, these solutions can adapt to constantly changing conditions in the data center for enhanced security.

As you can see, there are lots of great features and benefits for our customers.

You can find more great resources about NSX on SoftLayer here. Make sure to keep your eyes peeled for more great NSX news!


October 20, 2015

What’s in a hypervisor? More than you think

Virtualization has always been a key enabler of cloud-computing services. From the get-go, SoftLayer has offered a variety of options, including Citrix XenServer, Microsoft Hyper-V, and Parallels Cloud Server, just to name a few. It’s all about enabling choice.

But what about VMware—the company that practically pioneered virtualization, making it commonplace?

Well, we have some news to share. SoftLayer has always supported VMware ESX and ESXi (your basic, run-of-the-mill hypervisor), but now we’re enabling enterprise customers to run VMware vSphere on our bare metal servers.

This collaboration is significant for SoftLayer and IBM because it gives our customers tremendous flexibility and transparency when moving workloads into the public cloud. Enterprises already familiar with VMware can easily extend their existing on-premises VMware infrastructure into the IBM Cloud with simplified, monthly pricing. This makes transitioning into a hybrid model easier because it results in greater workload mobility and application continuity.

But the real magic happens when you couple our bare metal performance with VMware vSphere. Users can complete live workload migrations between data centers across continents. Users can easily move and implement enterprise applications and disaster recovery solutions across our global network of cloud data centers—with just a few clicks of a mouse. Take a look at this demo and judge for yourself.

What’s in a hypervisor? For some, it’s an on-ramp to the cloud and a way to make hybrid computing a reality. When you pair the flexibility of VMware with our bare metal servers, users get a combination that’s hard to beat.

We’re innovating to help companies make the transition to hybrid cloud, one hypervisor at a time. For more details, visit

-Jack Beech, VP of Business Development

September 2, 2015

Backup and Restore in a Cloud and DevOps World

Virtualization has brought many improvements to compute infrastructure, including snapshots and live migration[1]. When an infrastructure moves to the cloud, these options often become a client’s primary backup strategy. While snapshots and live migration can be part of a successful strategy, backing up in the cloud may require additional tools.

First, a basic question: Why do we take backups? They’re taken to recover from

  • The loss of an entire machine
  • Partially corrupted files
  • A complete data loss (either through hardware or human error)

While losing an entire machine is frightening, corrupted files or data loss are the more common reasons for data backups.

Snapshots are useful when the snapshot and restore occur in close proximity to each other, e.g., when you’re migrating middleware or an operating system and want to fall back quickly if something goes wrong. If you need to restore after extensive changes (hardware or data), a snapshot isn’t an adequate resource. The restore may require restoring to a new machine, selecting files to be restored, and moving data back to the original machine.

So if a snapshot isn’t the silver bullet for backing up in the cloud, what are the effective backup alternatives? The solution needs to handle a full system loss, partial data loss, or corruption, and ideally work for both virtualized and non-virtualized environments.

What to back up

There are three types of files that you’ll want to consider when backing up an active machine’s disks:

  • Binary files: Changed by operating system and middleware updates; can be easily stored and recovered.
  • Configuration files: Define how the binary files are connected and configured, and what data is accessible to them.
  • Data files: Generated by users and unrecoverable if not backed up. Data files are the most precious part of the disk content and losing them may result in a financial impact on the client’s business.

Keep in mind when determining your backup strategy that each file type has a different change rate—data files change faster than configuration files, which are more fluid than binary files. So, what are your options for backing up and restoring each type of file?

Binary files
In the case of a system failure, DevOps advocates (see Phoenix Servers from Martin Fowler) propose getting a new machine, which all cloud providers can automatically provision, including middleware. Automated provisioning processes are available for both bare metal and virtual machines.

Note that most open source products require only an Internet connection and a single command to install, while commercial products can be provisioned through automation.

Configuration files
Cloud-centric operations have a distinct advantage over traditional operations when it comes to backing up configuration files. With traditional operations, each element is configured manually, which has several drawbacks such as being time-consuming and error-prone. Cloud-centric operations, or DevOps, treat each configuration as code, which allows an environment to be built from a source configuration via automated tools and procedures. Tools such as Chef, Puppet, Ansible, and SaltStack show their power with central configuration repositories that are used to drive the composition of an environment. A central repository works well with another component of automated provisioning—changing the IP address and hostname.
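
To make the idea concrete, here's a minimal sketch of configuration-as-code in plain shell. The repository URL and file paths are hypothetical; in practice, a tool like Chef, Puppet, Ansible, or SaltStack would manage these steps for you:

#!/bin/bash
# Minimal sketch: rebuild a machine's configuration from a central repo.
# The repository URL and target paths are hypothetical examples.
set -e
git clone https://config.example.com/webserver-config.git /tmp/config
# Apply versioned configuration files instead of hand-editing them.
cp /tmp/config/nginx/nginx.conf /etc/nginx/nginx.conf
cp /tmp/config/app/app.env /etc/myapp/app.env
service nginx restart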

You have limited control of how the cloud will allocate resources, so you need an automated method to collect the information and apply it to all the machines being provisioned.

In a cloud context, it’s suboptimal to manage machines individually; instead, the machines have to be seen as part of a cluster of servers, managed via automation. Cluster automation is one of the core tenets of solutions like CoreOS’s Fleet and Apache Mesos. Resources are allocated and managed as a single entity via API, configuration repositories, and automation.

You can attain automation in small steps. Start by choosing an automation tool and begin converting your existing environment one file at a time. Soon, your entire configuration is centrally available and recovering a machine or deploying a full environment is possible with a single automated process.

In addition to being able to quickly provision new machines with your binary and configuration files, you are also able to create parallel environments, such as disaster recovery, test and development, and quality assurance. Using the same provisioning process for all of your environments assures consistent environments and early detection of potential production problems. Packages, binaries, and configuration files can be treated as data and stored in something similar to object stores, which are available in some form with all cloud solutions.

Data files
The final files to be backed up and restored are the data files. These files are the most important part of a backup and restore, and the hardest ones to replace. Part of the challenge is the volume of data as well as access to it. Data files are relatively easy to back up, the exception being files that are in transition, e.g., files being uploaded. Data file backups can be done with several tools, including synchronization tools or a full file backup solution. Another option is an object store, which is a natural repository for relatively static files and allows for a pay-as-you-go model.
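
For illustration, here's a minimal sketch of pushing a nightly archive of a data directory to object storage with the standard Swift command line client. The auth URL, credentials, and container name are placeholder values:

#!/bin/bash
# Minimal sketch: archive a data directory and push it to object storage.
# The auth URL, credentials, and container name are placeholders.
DATE=$(date +%F)
tar czf "/tmp/data-$DATE.tar.gz" /srv/app/data
cd /tmp
swift -A https://dal05.objectstorage.softlayer.net/auth/v1.0 \
      -U SLOS12345-1:someuser -K 'your_api_key' \
      upload backups "data-$DATE.tar.gz"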

Database content is a bit harder to back up. Even with instant snapshots on storage, backing up databases can be challenging. A snapshot at the storage level is an option, but it doesn’t allow for a partial database restore. Also, a snapshot can capture in-flight transactions that cause issues during a restore, which is why most database systems provide a mechanism for online backups. Those online backups should be used in combination with tools for file backups.
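
With MySQL, for example, an online, consistent backup of InnoDB tables can be taken without stopping the database. A minimal sketch, where the database name is a placeholder:

# --single-transaction takes a consistent snapshot for InnoDB tables
# without blocking writers. "mydb" is a placeholder database name.
mysqldump --single-transaction --routines mydb > mydb-$(date +%F).sql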

Something to remember about databases: many solutions end up accumulating data even after it’s no longer used. The content of an active database includes both current and historical data. Keeping both in one place allows for analytics on the same database, but it also increases the size of the database, making database-related operations harder. It may make sense to archive older data in other databases or flat files, which keeps the database volumes manageable.
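
As a rough illustration of that archiving step, older rows can be exported to a flat file and then purged. The table and column names here are hypothetical:

# Export orders older than two years to a flat file, then purge them.
# Table and column names are hypothetical.
mysqldump mydb orders --where="created_at < NOW() - INTERVAL 2 YEAR" \
    > orders-archive-$(date +%F).sql
mysql mydb -e "DELETE FROM orders WHERE created_at < NOW() - INTERVAL 2 YEAR"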


To recap: because the cloud provides rapid deployment of your operating system and convenient places to store data (such as object stores), it’s easy to factor cloud into your backup and recovery strategy. Following the containerization mindset, split the content of your machines into binaries, configuration, and data. Focus on automating the deployment of binaries and configuration; that makes it easier to deliver an environment, including quality assurance, test, and disaster recovery. Finally, use traditional backup tools for backing up data files. Together, these practices make it possible to rapidly and repeatedly recover complete environments while controlling the amount of backed-up data that has to be managed.


[1] Snapshots are not available on bare metal servers that have no virtualization capability.

July 14, 2015

Preventative Maintenance and Backups

Has your cPanel server ever gone down only to not come back online because the disk failed?

At SoftLayer, data migration is in the hands of our customers. That means you must save your data and move it to a new server yourself. Well, thanks to a lot of slow weekends, I’ve had time to write a bash script that automates the process for you. It’s been tested both in a dev environment of my own, working with the data center to simulate the dreaded DRS (data retention service) process after a drive fails, and in a live environment to see what new curveballs could come up. In this three-part series, we’ll discuss how to do preventative maintenance on your server to head off a total disaster; how to restore your backed-up data (if you have backups); and finally, we’ll go over the script itself, which fully automates backing up, moving, and restoring all of your cPanel data safely (if the prior two aren’t options for you).

Let’s start off with some preventative maintenance first and work on setting up backups in WHM itself.

The first thing you’ll need to do is log in to your WHM, then go to Home >> Backup >> Backup Configuration. You will probably see an information box at the top that says “The legacy backups system is currently disabled.” That’s fine; let it stay disabled. The legacy backup system is going away soon anyway, and the newer system allows for more customization. If you haven’t clicked “Enable” under Global Settings, now is the time to do so, so that the rest of the page becomes visible. You should now be able to modify the rest of the backup configuration, so let’s start with the backup type.

In my personal opinion, compressed is the only way to go. Yes, it takes longer, but it uses less disk space in the end; uncompressed is faster but eats up too much space. Incremental is not a good choice either: it only allows for one backup, and it doesn’t let you include additional destinations.

The next section is scheduling and retention. Personally, I like my backups done daily with a five-day retention plan. Yes, it uses a bit more space, but it’s also the safest, because you’ll have backups from literally the day prior in case something happens.

The next section, Files, is where you pick the users you want to back up, along with what types of data to include. I prefer to leave the default settings in this section and just choose the users I want to back up. It’s your server though, so you’re free to enable or disable the various options as you see fit. I would, however, leave the options for backing up system files checked, as that’s highly recommended.

The next section deals with databases, and again, this one’s up to you. Per Account Only is the bare minimum option and is still safe. Entire MySQL Directory simply backs up the whole MySQL directory instead. The last option encompasses the two prior options, which to me is overkill, as the Per Account Only option works well enough on its own.

Now let’s start the actual configuration of the backup service. From here, we’ll choose the backup directory as well as a few other options regarding retention and additional destinations. The best practice is to have a drive specifically for backups: not just another partition or a folder, but a completely separate drive. Wherever you want the backups to reside, type that path in the box. I usually have a secondary drive mounted as /backup to put them in, so the pre-filled option works fine for me. The option for mounting the drive as needed should be enabled if you have a separate mount point that is not always mounted.

As for additional destinations, that’s up to you if you want to make backups of your backups. This lets you keep copies offsite somewhere else, just in case your server decides to divide by zero or some other random issue takes everything down without being recoverable. Clicking “Create New Destination” brings up a new section where you fill in the details for the destination type you chose.
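
If you do dedicate a separate drive to backups, a minimal sketch of formatting and mounting it at /backup might look like the following. The device name /dev/xvdb is a placeholder, so check yours with lsblk first:

# Format the spare drive and mount it at /backup.
# /dev/xvdb is a placeholder device name; verify with lsblk before running.
mkfs.ext4 /dev/xvdb
mkdir -p /backup
echo '/dev/xvdb /backup ext4 defaults,noatime 0 2' >> /etc/fstab
mount /backup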

Once you’ve done all of this, simply click “Save Configuration.” Now you’re done!

But let’s say you’re ready to make a full backup right now instead of waiting for it to run automatically. For this, we’ll need to log in to the server via SSH and run a command. Using whatever SSH tool you prefer (PuTTY for me), connect to your server using the root username and password that you used to log in to WHM. From there, run one simple command to back everything up:

/usr/local/cpanel/bin/backup --force

This forces a full backup of every user that you selected earlier when you configured backups in WHM.

That’s pretty much it as far as preventative maintenance and backups go. Next time, we’ll go into how to restore all this content to a new drive in case something happens like someone accidentally deleting a database or a file that they really need back.


April 27, 2015

Good Documentation: A How-to Guide

As part of my job in Development Support, I write internal technical documentation for employee use only. My department is also the last line of support before a developer is called in for customer support issues, so we manage a lot of the troubleshooting documentation. Some of the documentation I write and use is designed for internal use for my position, but some of it is troubleshooting documents for other job positions within the company. I have a few guidelines that I use to improve the quality of my documentation. These are by no means definitive, but they’re some helpful tips that I’ve picked up over the years.


I’m sure everyone has met the frustration of reading a long-winded sentence that should have been three separate sentences. Keeping your sentences as short as possible helps ensure that your advice won’t go in one ear and out the other. If you can write things in a simpler way, you should do so. The goal of your documentation is to make your readers smarter.

Avoid phrasing things in a confusing way. A good example of this is how you employ parentheses. Sometimes it is necessary to use them to convey important beneficial tidbits to your readers. If you write something with parentheses in it, and you can’t read it out loud without it sounding confusing, try to re-word it, or run it by someone else.

Good: It should have "limited connectivity" (the computer icon with the exclamation point) or "active" status (the green checkmark) and NOT "retired" (the red X).
Bad: It should have the icon “limited connectivity” (basically the computer icon with the exclamation point that appears in the list) (you can see the “limited connectivity” text if you hover over it) or “active” (the green checkmark) status and NOT the red “retired” X icon.

Ideally, you should use the same formatting across all of your documentation. At the very least, keep the formatting consistent within a document. All of our transaction troubleshooting documentation at SoftLayer uses a standardized error format that is consistent and easy to read. Sometimes it might be necessary to break convention if it genuinely improves readability, but very often it just makes things more difficult. For example, collapsible menus may look tidy, but they make it hard to search the entire page using Ctrl+F.

And finally, if people continually have a slew of questions, it’s probably time to revise your documentation and make it clearer. If it’s too complex, break it down into simpler terms. Add more examples to help clarify things so that it makes sense to your end reader.


Use bullet points or numbered lists when listing things instead of a paragraph block. I mention this because good formatting saves man-hours. There’s a difference between one person having to search a document for five minutes, versus 100 people having to search a document for five minutes each. That’s over eight man-hours lost. Bullet points are much faster to skim through when you are looking for something specific in the middle of a page somewhere. Avoid the “TL;DR” effect and don’t send your readers a wall of text.

Avoid superfluous information. If you have extra information beyond what is necessary, it can have an adverse effect on your readers. Your document may be the first your readers have read on your topic, so don’t overload them with too much information.

Don’t create duplicate information. If your documentation source is electronic, keep your documentation from repeating information, and just link to it in a central location. If you have the same information in five different places, you’ll have to update it in five different places if something changes.

Break up longer documents into smaller, logical sections. Organize your information first. Figure out headings and main points. If your page seems too long, try to break it down into smaller sections. For example, you might want to separate a troubleshooting section from the product information section. If your troubleshooting section grows too large, consider moving it to its own page.


Don’t make assumptions about what the users already know. If it wasn’t covered in your basic training when you were hired, consider adding it to the documentation. This is especially important when you are documenting things for your own job position. Don’t leave out important details just because you can remember them offhand. You’re doing yourself a favor as well. Six months from now, you may need to use your documentation and you may not remember those details.

Bad: SSH to the image server and delete the offending RGX folder.
Good: SSH to the image server (imageserver.mycompany.local), run ls -al /dev/rgx_files/ | grep blah to find the offending RGX folder, and then use rm -rf /dev/rgx_files/<folder> to delete it.

Make sure your documentation covers as much ground as possible. Cover every error and every possible scenario that you can think of. Collaborate with other people to identify any areas you may have missed.

Account for errors. Error messages often give very helpful information. The error might be as straightforward as “Error: You have entered an unsupported character: ‘$’.” Make sure to document the cause and fix for it in detail. If there are unsupported characters, it might be a good idea to provide a list of them.

If something is confusing, provide a good example. It’s usually pretty easy to identify the pain points—the things you struggle with are probably going to be difficult for your readers as well. Sometimes things can be explained better in an example than they can in a lengthy paragraph. If you were documenting a command, it might be worthwhile to provide a good example first and then break it down and explain it in detail. Images can also be very helpful in getting your point across. In documenting user interfaces, an image can be a much better choice than words. Draw red boxes or arrows to guide the reader on the procedure.


March 27, 2015

Building “A Thing” at Hackster.io’s Hardware Weekend

Introduction to Hackster.io

Over the weekend in San Francisco, I attended a very cool hackathon put together by the good folks at Hackster.io. Hackster.io’s Hardware Weekend is a series of hackathons all over the country designed to bring together people with a passion for building things, give them access to industry mentors, and see what fun and exciting things they come up with in two days. The registration desk was piled with all kinds of hardware modules to be used for whatever project you could dream up: Intel Edison boards, the Grove Starter Kit, a few other things that I have no idea what they did, and of course, plenty of stickers.

After a delicious breakfast, we heard a variety of potential product pitches by the attendees, then everyone split off into groups to support their favorite ideas and turn them into a reality.

When not hard at work coding, soldering, or wiring up devices, the attendees heard talks from a variety of industry leaders, who shared their struggles and what worked for their products. One founder gave a great talk on how his company began and where it is today.

Building a thing!
After lunch, Phil Jackson, SoftLayer’s lead technology evangelist, gave an eloquent crash course in SoftLayer and how to get your new thing onto the Internet of Things. Phil and I have a long history in Web development, so we provided answers to many questions on that subject. But when it comes to hardware, we are fairly green. So when we weren't helping teams get into the cloud, we tried our hand at building something ourselves.

We started off with some of the hardware handouts: an Edison board and the Grove Starter Kit. We wanted to complete a working project in the same amount of time the rest of the teams had, and to show off some of the power of SoftLayer, too. Our idea was to use the Grove Kit’s heat sensor, display its readings on the LCD, and post the results to an IBM Cloudant database, which would then be displayed on a SoftLayer server as a live-updating graph.

The first day consisted mostly of Googling variations on “Edison getting started,” “read Grove heat sensor,” “write to LCD,” etc. We started off simply, by trying to make an LED blink, which was pretty easy. Making the LED STOP blinking, however, was a bit more challenging. But we eventually figured out how to stop a program from running. We had a lot of trouble getting our project to work in Python, so we eventually admitted defeat and switched to writing Node.js code, which was significantly easier (mostly because everything we needed was on Stack Overflow).
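
For the curious, the Cloudant leg of a pipeline like ours boils down to an HTTP POST against Cloudant’s CouchDB-style API. A minimal sketch with curl, where the account name, database, and credentials are placeholders:

# Post a temperature reading to a Cloudant (CouchDB-style) database.
# Account, database, and credentials are placeholder values.
curl -X POST "https://myaccount.cloudant.com/temperature" \
     -u "myaccount:my_api_password" \
     -H "Content-Type: application/json" \
     -d '{"celsius": 22.5, "ts": "2015-03-21T14:05:00Z"}'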

After we got the general idea of how these little boards worked, our project came together very quickly at the end of Day 2, and not a moment too soon. The second I shouted, “IT WORKS!” it was time for presentations, and for us to give out the lot of Raspberry Pis we brought to some lucky winners.

And, without further ado, we present to you … the winners!


This team wanted to mod out the Hackster DeLorean time machine to prevent Biff (or anyone else) from taking it out for a spin. They used a variety of sensors to monitor the DeLorean for any unusual or unauthorized activity, and if all else failed, were prepared to administer a deadly voltage through the steering wheel (represented by harmless LEDs in the demo) to stop the interloper from stealing their time machine. The team has a wonderful write-up of the sensors they used, along with the products used to bring everything together.

This was a very energetic team who we hope will use their new Raspberry Pis to keep the space-time continuum clear.


The KegTime project aimed to make us all more responsible drinkers by using an RFID reader to measure alcohol consumption and call Uber for you when you have had enough. They used a SoftLayer server to host all the drinking data, and used it to interact with Uber’s API to call a ride at the appropriate moment. Their demo included a working (and filled) keg with a pretty fancy LED-laden tap, which was very impressive. In recognition of their efforts to make us all more responsible drinkers, we awarded them five Raspberry Pis so they can continue to build cool projects to make the world a better place.

The Future of Hackster.io
Although this is the end of the event in San Francisco, there are many more events coming up in the near future. I will be going to Phoenix next on March 28 and look forward to all the new projects inventors come up with.

Be happy and keep hacking!


March 18, 2015

SoftLayer, Bluemix and OpenStack: A Powerful Combination

Building and deploying applications on SoftLayer with Bluemix, IBM’s Platform as a Service (PaaS), just got a whole lot more powerful. At IBM’s Interconnect, we announced a beta service for deploying OpenStack-based virtual servers within Bluemix. Obviously, the new service is exciting because it brings together the scalable, secure, high-performance infrastructure from SoftLayer with the open, standards-based cloud management platform of OpenStack. But making the new service available via Bluemix presents a particularly unique set of opportunities.

Now Bluemix developers can deploy OpenStack-based virtual servers on SoftLayer or their own private OpenStack cloud in a consistent, developer-friendly manner. Without changing your code, your configuration, or your deployment method, you can launch your application to a local OpenStack cloud on your premises, a private OpenStack cloud you have deployed on SoftLayer bare metal servers, or to SoftLayer virtual servers within Bluemix. For instance, you could instantly fire up a few OpenStack-based virtual servers on SoftLayer to test out your new application. After you have impressed your clients and fully tested everything, you could deploy that application to a local OpenStack cloud in your own data center, all from within Bluemix. With Bluemix providing the ability to deploy applications across cloud deployment models, developers can create an infrastructure configuration once and deploy consistently, regardless of the stage of their application development life cycle.

OpenStack-based virtual servers on SoftLayer enable you to manage all of your virtual servers through standard OpenStack APIs and user interfaces, and to leverage the tooling, knowledge, and processes you or your organization have already built out. So the choice is yours: you may manage your virtual servers entirely from within the Bluemix user interface, or choose standard OpenStack interface options such as the Horizon management portal, the OpenStack API, or the OpenStack command line interface. For clients who are looking for enterprise-class infrastructure as a service but wish to avoid getting locked into a vendor’s proprietary interface, this standard OpenStack access provides a new choice.
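
As a quick illustration, here’s a minimal sketch of managing a server with the standard nova command line client. The credentials file, image, and flavor names are placeholders:

# Load OpenStack credentials (an RC file you download; the name is a placeholder).
source openrc.sh
# Boot, list, and delete a virtual server with the standard nova CLI.
nova boot --image ubuntu-14.04 --flavor m1.small my-test-server
nova list
nova delete my-test-server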

Providing OpenStack-based virtual servers is just one more (albeit major) step toward our goal of even deeper OpenStack integration with SoftLayer services. For clients who want enterprise-class Infrastructure as a Service (IaaS) that is available globally and accessible via standard OpenStack interfaces, OpenStack-based virtual servers on SoftLayer deliver exactly that.

The beta is open now for you to test deploying and running servers on the new SoftLayer OpenStack public cloud service through Bluemix. You can sign up for a Bluemix 30-day free trial.

- @marcalanjones

March 4, 2015

Docker: Containerization for Software

Before the modern shipping container, packing and transporting differently shaped boxes and other oddly shaped items from ships to trucks to warehouses was difficult, inefficient, and cumbersome. That changed when the standardized shipping container was introduced: containers could easily be stacked and organized onto a cargo ship, then transferred to a truck and sent on to their final destination. Solomon Hykes, Docker founder and CTO, likens Docker to the shipping industry’s container solution: Docker uses containerization for shipping software.

Docker, an open platform for distributed applications used by developers and system administrators, leverages standard Linux container technologies and some git-inspired image management technology. Users can create containers that have everything they need to run an application, just like a virtual server, but much lighter to deploy and manage. Each container has all the binaries it needs, including libraries and middleware, configuration, and an activation process. Containers can be moved around [like containers on ships] and executed on any Docker-enabled server.

Container images are built and maintained using deltas, which can be shared by several images. Sharing reduces the overall size and allows for easy image storage in Docker registries [like containers on ships]. Any user with access to the registry can download an image and activate it on any server with a couple of commands. Some organizations have development teams that build the images, which are then run by their operations teams.

Docker & SoftLayer

The lightweight containers can be used on both virtual servers and bare metal servers, making Docker a nice fit with a SoftLayer offering. You get all the flexibility of a re-imaged server without the downtime. You can create red-black deployments, and mix hourly and monthly servers, both virtual and bare metal.

While many people share images on the public Docker registry, security-minded organizations will want to create a private registry by leveraging SoftLayer object storage. You can create Docker images for a private registry that will store all its information with object storage. Registries are then easy to create and move to new hosts or between data centers.

Creating a Private Docker Registry on SoftLayer

Use the following information to create a private registry that stores data with SoftLayer object storage. [All the commands below were executed on an Ubuntu 14.04 virtual server on SoftLayer.]

Optional setup step: Change Docker backend storage AuFS

Docker has several options for an image storage backend. The default backend is DeviceMapper, but it was not very stable during our testing, failing to start and export images. The solution was to move to Another Union File System (AuFS). This step may not be necessary in your build, depending on updates to the operating system or Docker itself.
  1. Install the following package to enable AuFS:
    apt-get install linux-image-extra-3.13.0-36-generic
  2. Edit /etc/init/docker.conf, and add the AuFS storage driver argument to the Docker daemon options (for example, DOCKER_OPTS="--storage-driver=aufs").
  3. Restart Docker, and check if the backend was changed:
    service docker restart
    docker info

The command should indicate AuFS is being used. The output should look similar to the following:
Containers: 2
Images: 29
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Dirs: 33
Execution Driver: native-0.2
Kernel Version: 3.13.0-36-generic
WARNING: No swap limit support

Step 1: Create image repo

  1. Create the directory registry-os in a work directory.
  2. Create a file named Dockerfile in the registry-os directory. It should contain the following code:
    # start from a registry release known to work
    FROM registry:0.7.3
    # get the swift driver for the registry
    RUN pip install docker-registry-driver-swift==0.0.1
    # SoftLayer uses v1 auth and the sample config doesn't have an option 
    # for it so inject one
    RUN sed -i '91i\    swift_auth_version: _env:OS_AUTH_VERSION' /docker-registry/config/config_sample.yml
  3. Execute the following command from the directory that contains the registry-os directory to build the registry container:
    docker build -t registry-swift:0.7.3 registry-os

Step 2: Start it with your object storage credential

The credentials and container on the object storage must be provided in order to start the registry image. The standard Docker way of doing this is to pass the credentials as environment variables.
docker run -it -d -e SETTINGS_FLAVOR=swift \
    -e OS_USERNAME=API_USER \
    -e OS_PASSWORD=API_KEY \
    -e OS_AUTH_URL='https://dal05.objectstorage.softlayer.net/auth/v1.0' \
    -e OS_AUTH_VERSION=1 \
    -e OS_CONTAINER='docker' \
    -e GUNICORN_WORKERS=8 \
    -p 5000:5000 \
    registry-swift:0.7.3

This example assumes we are storing images in DAL05 on a container called docker. API_USER and API_KEY are the object storage credentials you can obtain from the portal.

Step 3: Push image

An image needs to be pushed to the registry to make sure everything works. The image push involves two steps: tagging an image and pushing it to the registry.
docker tag registry-swift:0.7.3 localhost:5000/registry-swift
docker push localhost:5000/registry-swift

You can ensure that it worked by inspecting the contents of the container in the object storage.

Step 4: Get image

The image can be downloaded once successfully pushed to object storage via the registry by issuing the following command:
docker pull localhost:5000/registry-swift
Images can be downloaded from other servers by replacing localhost with the IP address of the registry server.

Final Considerations

Once you have created your private registry, its image can be pushed throughout your infrastructure. Because the registry keeps all of its data in object storage, failure of the machine that hosts the registry can be quickly mitigated by restarting the registry image on another node; you can even run it on more than one node. This lets you leverage the SoftLayer platform and the high durability of object storage.

If you haven’t explored Docker, visit their site, and review the use cases.


February 20, 2015

Create and Deliver Marketing or Transactional Emails

The SoftLayer email delivery service is a highly scalable, cloud-based, email relay solution. In partnership with SendGrid, an email as a service provider, SoftLayer customers are able to create and deliver marketing or transactional emails via the customer portal or SendGrid APIs.

The SoftLayer email delivery service isn’t a full corporate email solution. It’s intended as a simplified method for delivering digital marketing (e.g., newsletters and coupons) and transactional content (e.g., order confirmation, shipping notice, and password reset) to customers.


Traditionally, email is sent through an outbound mail server that’s configured and maintained in-house, which is often costly and difficult.

With the SoftLayer email delivery service, the process is simplified; the only requirement is a connection to the Internet.

Package Comparison

Four service levels are available to SoftLayer customers: Free, Basic, Advanced, and Enterprise. The Free and Basic tiers are suitable for smaller applications with lower volume requirements, while the Advanced and Enterprise tiers suit larger applications and customers that require enhanced monitoring and other advanced features. Note that marketing emails are only available in the Advanced and Enterprise tiers.

Getting Started

Use the following steps to sign up for the SoftLayer email delivery service.

  1. Log on to the customer portal.
  2. Click Services, Email Delivery.
  3. Click the Order Email Delivery Service link at the top of the page.
  4. Choose your desired package, and fill out the required information. Remember for marketing emails, you must select either the Advanced or Enterprise packages.

Configuring a Marketing Email

Most of your interaction will be through the vendor portal provided by SendGrid. The following steps outline how to compose and deliver a marketing email to a list of subscribers.

  1. From the SoftLayer customer portal, navigate to Services, Email Delivery Service and click Actions, Access Vendor Portal for your desired account.
  2. Once in the SendGrid portal, click the Marketing Email link.
  3. You’ll be taken to the Marketing Email Dashboard. Click the Create a Sender Address button.
  4. Fill in the required information and click Save.
  5. Navigate back to the Marketing Email Dashboard, and click the Create Recipient List button.
  6. Enter a name for the list in the List Name field. Be sure that it’s something meaningful, such as Residential Customers.
  7. You can either Upload a list of contact emails or Add recipients manually. When adding recipients manually, you’ll be asked to verify the addresses that you enter. Click the Save button when you’re done entering addresses.
  8. Navigate back to the Marketing Email Dashboard and click the Create Marketing Email button.
  9. Enter the title of the email in the Marketing Email Title field, and select a sender address under Pick a Sender Address. Choose your content type and how to send the email. Split Test my Marketing Email, under Choose how to send your Marketing Email, is an advanced feature that lets you send different recipients different versions of the same email; the results help determine which version is most effective.
  10. Select the list of recipients to whom the email is to be sent and click Save.
  11. Next, select the template for the email. Options include Basic, Design, and My Saved Templates.
  12. Enter your email content. Make sure to provide a message subject.
  13. Review your email, and select when you would like it sent: Send Now, on a Schedule, or Save As Draft. Click Finish when you’re done, or Save & Exit to keep it as a draft.
  14. You’ll be brought back to the Marketing Email Dashboard, where you can monitor the results of your email campaign.

Setting Up a Transactional Email

The following example shows how to integrate your app with SendGrid to send new users a welcome email. This example makes use of the SendGrid template engine, although it’s not required.

  1. From the SendGrid portal, click the Template Engine button.
  2. Click the Create Template button, enter the Template Name, and click Save.
  3. Design and modify your email, and click Save when finished.
  4. Your new template should now be Active and ready to be used by the API.
  5. Click the Apps link in the top navigation bar.
  6. Click the Template Engine link on the right side of the screen.
  7. Take note of the ID of the template you just created.
  8. Use the curl utility to test your email via the SendGrid Web API.
  9. Execute the following to send a test email using your new template.

curl https://api.sendgrid.com/api/mail.send.json \
    --data-urlencode 'to=recipient@example.com' \
    --data-urlencode 'subject=Test subject' \
    --data-urlencode 'text=Test Body' \
    --data-urlencode 'from=sender@example.com' \
    --data-urlencode 'api_user=your_sendgrid_user' \
    --data-urlencode 'api_key=your_sendgrid_key'

The addresses and API credentials above are placeholders; substitute your own SendGrid username, key, and email addresses.

For more information on how the SoftLayer email delivery service can help you get back to your core business, check out this blog post.


Worldwide Channel Solutions Architect for SoftLayer, an IBM Company
