Tips And Tricks Posts

November 4, 2015

Shared, scalable, and resilient storage without SAN

Storage area networks (SAN) are used most often in the enterprise world. In many enterprises, you will see racks filled with these large storage arrays. They are mainly used to provide a centralized storage platform, and they offer only limited scalability. They require special training to operate; they are expensive to purchase, support, and expand; and if those devices fail, there is big trouble.

Some people might say SAN devices are a necessary evil. But are they really necessary? Aren’t there alternatives?

Most startups nowadays run their services on commodity hardware, with smart software to distribute their content across server farms globally. Well-established, successful companies that run websites or apps like WhatsApp, Facebook, or LinkedIn continue to operate in much the same way they started. They need the ability to scale and perform at unpredictable rates all around the world, so they use commodity hardware combined with smart software. These companies need the features that SAN storage offers, but with greater scalability and global resiliency, without centralization, and without having to buy expensive hardware. But how do they give servers access to the same data, and how do they avoid data loss?

The answer is actually quite simple, although its technology is quite sophisticated: distributed storage.

In a world where virtualization has become a standard for most companies, where even applications and networking are being virtualized, virtualization giant VMware answers this question with Virtual SAN. It effectively eliminates the need for SAN hardware in a VMware environment (and it will also be available for purchase from SoftLayer before the end of the year). Other similar distributed products are GlusterFS (also offered in our QuantaStor solution), Ceph, Microsoft Windows DFS, Hadoop HDFS, document-oriented databases like MongoDB, and many more.

These solutions vary in maturity, however. Object storage is a great example of a newer type of storage on the market that doesn’t require SAN devices. With SoftLayer, you can run them all.

When you have bare metal servers set up as hypervisors or application servers, it’s likely those servers have a lot of drive bays, mostly unused. Stuffing them with hard drives and letting the software distribute your data across multiple servers in multiple locations, with two or three replicas, results in a big, safe, fast, distributed storage platform. Scaling such a platform is simply a matter of adding more bare metal servers with even more hard drives and letting the software handle the rest.
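To make that concrete, here is a minimal sketch using GlusterFS, one of the distributed products mentioned above. The hostnames, brick paths, and volume name are hypothetical, and a production deployment would spread its replicas across racks or data centers:

# On each server, a spare drive has already been formatted and mounted at /data/brick1.
# From any one node, join the peers and create a volume with three replicas:
gluster peer probe server2.example.com
gluster peer probe server3.example.com
gluster volume create shared-vol replica 3 \
  server1.example.com:/data/brick1/vol \
  server2.example.com:/data/brick1/vol \
  server3.example.com:/data/brick1/vol
gluster volume start shared-vol

# Any machine with the GlusterFS client can then mount the same distributed volume:
mount -t glusterfs server1.example.com:/shared-vol /mnt/shared

Adding capacity later is a matter of adding servers, probing them as peers, and expanding the volume with more bricks.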

Nowadays we are seeing more and more hardware solutions like SAN—or even networking—being replaced with smarter software on simpler and more affordable hardware. At SoftLayer, we offer month-to-month and hourly bare metal servers with up to 36 drive bays, potentially providing a lot of room for storage. With 10Gbps global connectivity options, we offer fast, low-latency networking for syncing between servers and delivering data to customers.


October 29, 2015

How to measure the performance of striped block storage volumes

To build on the performance specifications of our block and file storage offerings, SoftLayer provides a wide range of volume size and performance combinations for your storage needs. But what if your storage performance or size requirements are more specific than what is currently offered?

In this post, I’ll show you how to configure and validate a sample RAID 0 configuration with:

  1. The use of LVM on CentOS to create a RAID 0 array with 3 volumes
  2. The use of FIO to apply IO load to the array
  3. The ability to measure throughput of the array

Without going into the potential drawbacks of RAID 0, we should be able to observe up to three times the throughput and size of any single volume. For example, if we needed a 60GB volume at 240 IOPS, we should be able to stripe three 20GB volumes, each provisioned at 4 IOPS/GB. You can also extrapolate from this example to fit a range of performance and reliability requirements.

To start, we will provision 3x 20GB Endurance volumes at 4 IOPS/GB and make them accessible to our CentOS VM, but stop short of creating a file system; i.e., stop once you are able to list three volumes with:

# fdisk -l | grep /dev/mapper
Disk /dev/mapper/3600a09803830344f785d46426c37364a: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/mapper/3600a09803830344f785d46426c373648: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/mapper/3600a09803830344f785d46426c373649: 21.5 GB, 21474836480 bytes, 41943040 sectors

Then proceed to create the three-stripe volume with the following commands:

# pvcreate /dev/mapper/3600a09803830344f785d46426c37364a /dev/mapper/3600a09803830344f785d46426c373648 /dev/mapper/3600a09803830344f785d46426c373649
# vgcreate new_vol_group /dev/mapper/3600a09803830344f785d46426c37364a /dev/mapper/3600a09803830344f785d46426c373648 /dev/mapper/3600a09803830344f785d46426c373649
# lvcreate -i3 -I16 -l100%FREE -nstriped_logical_volume new_vol_group

This creates a logical volume with three stripes (-i), a stripe size (-I) of 16KB, and a volume size (-l) of 100 percent of the free space, or 60GB.
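Before moving on, you can sanity-check the stripe layout. One quick way to do that (assuming the volume group and logical volume names used above) is:

# lvs --segments
# lvdisplay -m /dev/new_vol_group/striped_logical_volume

The #Str column from lvs should report 3, and lvdisplay -m should show three stripes with a 16KB stripe size spread across the three physical volumes.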

You can now create the file system on the new logical volume, create a mount point, and mount the volume:

# mkfs.ext3 /dev/new_vol_group/striped_logical_volume
# mkdir -p /mnt
# mount /dev/mapper/new_vol_group-striped_logical_volume /mnt

Now download, build, and run FIO:

# yum install -y gcc libaio-devel
# cd /tmp
# wget
# tar -xvf 3aa21b8c106cab742bf1f20d60629e3f
# cd fio-2.1.10/
# make
# make install
# cd /mnt
# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=16k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=50

This will execute the benchmark test with 16KB blocks (--bs), a random mixed workload (--readwrite=randrw) at 50 percent reads and 50 percent writes (--rwmixread=50), and a queue depth of 64 outstanding I/Os (--iodepth=64), running until the 1GB test file (--size=1G) has been fully processed.

Here is a snippet of output once completed:

read : io=51712KB, bw=1955.8KB/s, iops=122, runt= 26441msec
write: io=50688KB, bw=1917.3KB/s, iops=119, runt= 26441msec

This shows a measured throughput of 122 read + 119 write ≈ 240 IOPS, which matches what we provisioned: 3 x 20GB x 4 IOPS/GB = 3 x 80 IOPS = 240 IOPS.

Here is a table showing how the results would differ if we tuned the load with varying block sizes (--bs):

As you can see from the results, you may not observe the expected 3x throughput (IOPS) in every case, so please be mindful of your logical volume configuration (stripe size) versus your load profile (--bs). Please refer to our FAQ for further details on other possible limits.
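If you would like to reproduce that comparison yourself, one approach is a simple loop over block sizes using the same FIO options as above (the test file names are arbitrary):

# Run the same mixed random workload at several block sizes and compare the results
for bs in 4k 16k 64k 256k; do
  fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
      --name=test-$bs --filename=test-$bs --bs=$bs --iodepth=64 \
      --size=1G --readwrite=randrw --rwmixread=50
done

Comparing the reported IOPS and bandwidth for each run against your stripe size will show where the configuration stops scaling for your particular workload.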


October 21, 2015

The Dumbest Thing I’ve Ever Said

Last week, I attended the LAUNCH Scale conference and had the pleasure of attending the VIP dinner the night before the event began. We hosted the top 10 startups from the IBM SmartCamp worldwide competition for the dinner and throughout the events. Famed Internet entrepreneur Jason Calacanis joined us for the dinner and gave a quick pep talk to the teams. He mentioned that people come up to him and lament that they wished they’d gotten into the "Internet thing" earlier—and that he's been hearing this since 1999. His story reminded me of a similar personal experience.

In the fall semester of 1995, I was a junior at St. Bonaventure University, working in the computer lab. One day after helping a cute girl I had a crush on, she said to me, “You’re so good with computers, why aren’t you a computer science major?” Swelling with pride, I tried to sound impressive and intelligent as I definitively stated, “Windows 95 just came out, and pretty much everything that can be built with computers has been built.”

Yep. Windows 95. The pinnacle of software achievement.

It is easily the dumbest thing I've ever said—and perhaps up there as one of the dumbest things anyone has said. Ever.

But I hear corollaries to this fairly often, both in and outside the startup world. "There's no room for innovation there," or "You can't make money there," or "That sector is awful, don't bother." I'm guilty of a few of those statements myself—yet businesses find a way. We live in an age of unprecedented innovation. Just because one person didn't have the key to unlock it doesn't mean the door is closed.

Catch yourself before you fall into this loop of thinking. It might mean being the "Uber of X" or starting a business that's far ahead of its time. Think it's crazy to say everything that can be built has been built? I think it's just as crazy to say, "It's too late to get into ___ market."

For example, when markets grow in size, they also grow in complexity. The first mover in the space defines the market, catches the innovators and early adopters, and builds the bridge over the chasm to the early and late majority. (For more on this, read Crossing the Chasm by Geoffrey Moore.) When a market begins to service the majority, the needs of many are not being met, which leaves room for new entrants to build a business that addresses the segments dissatisfied with the current offerings or needing specialized versions.

The LAUNCH Scale event showcased dozens of startups, and the innovation out there in the world never ceases to amaze me. I'd recommend it to any startup that has built something great and now needs to scale. Still haven't built something yourself? Think you missed the opportunity to build and create? In 1995, I didn't think about how things would change in five, 10, even 20 years. Now it's 2015, and the startup world has been growing faster than any sector in history.

Think everything that could be built has been built? Think again. Want to build something? Do it. Build something. What are you waiting for? Go make a difference in the world.


September 2, 2015

Backup and Restore in a Cloud and DevOps World

Virtualization has brought many improvements to the compute infrastructure, including snapshots and live migration¹. When an infrastructure moves to the cloud, these options often become a client’s primary backup strategy. While snapshots and live migration are part of a successful strategy, backing up in the cloud may need additional tools.

First, a basic question: Why do we take backups? They’re taken to recover from

  • The loss of an entire machine
  • Partially corrupted files
  • A complete data loss (either through hardware or human error)

While losing an entire machine is frightening, corrupted files or data loss are the more common reasons for data backups.

Snapshots are useful when the snapshot and restore occur in close proximity to each other, e.g., when you’re migrating middleware or an operating system and want to fall back quickly if something goes wrong. If you need to restore after extensive changes (hardware or data), a snapshot isn’t an adequate resource. The restore may require restoring to a new machine, selecting files to be restored, and moving data back to the original machine.

So if a snapshot isn’t the silver bullet for backing up in the cloud, what are the effective backup alternatives? The solution needs to handle a full system loss, partial data loss, or corruption, and ideally work for both virtualized and non-virtualized environments.

What to back up

There are three types of files that you’ll want to consider when backing up an active machine’s disks:

  • Binary files: Changed by operating system and middleware updates; can be easily stored and recovered.
  • Configuration files: Define how the binary files are connected and configured, and what data is accessible to them.
  • Data files: Generated by users and unrecoverable if not backed up. Data files are the most precious part of the disk content and losing them may result in a financial impact on the client’s business.

Keep in mind when determining your backup strategy that each file type has a different change rate—data files change faster than configuration files, which are more fluid than binary files. So, what are your options for backing up and restoring each type of file?

Binary files
In the case of a system failure, DevOps advocates (see Martin Fowler on Phoenix Servers) propose provisioning a new machine, which all cloud providers can do automatically, middleware included. Automated provisioning processes are available for both bare metal and virtual machines.

Note that most open source products only require an Internet connection and a single command to install, while commercial products can be provisioned through automation.

Configuration files
Cloud-centric operations have a distinct advantage over traditional operations when it comes to backing up configuration files. In traditional operations, each element is configured manually, which is time-consuming and error-prone. Cloud-centric operations, or DevOps, treat configuration as code, which allows an environment to be built from a source configuration via automated tools and procedures. Tools such as Chef, Puppet, Ansible, and SaltStack show their power with central configuration repositories that drive the composition of an environment. A central repository also works well with another component of automated provisioning: changing the IP address and hostname.

You have limited control over how the cloud will allocate resources, so you need an automated method to collect the information and apply it to all the machines being provisioned.

In a cloud context, it’s suboptimal to manage machines individually; instead, the machines have to be seen as part of a cluster of servers, managed via automation. Cluster automation is one of the core tenets of solutions like CoreOS’ Fleet and Apache Mesos. Resources are allocated and managed as a single entity via API, configuration repositories, and automation.

You can attain automation in small steps. Start by choosing an automation tool and begin converting your existing environment one file at a time. Soon, your entire configuration is centrally available and recovering a machine or deploying a full environment is possible with a single automated process.
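As a small illustration of what configuration as code looks like in practice (a sketch assuming Ansible, one of the tools named above; the group name, inventory file, and package are hypothetical), the same change can be pushed to every machine from one place instead of being applied by hand:

# Install, configure, and restart a web server across an entire group in three commands
ansible webservers -i inventory.ini --become -m yum -a "name=httpd state=present"
ansible webservers -i inventory.ini --become -m template \
  -a "src=templates/httpd.conf.j2 dest=/etc/httpd/conf/httpd.conf"
ansible webservers -i inventory.ini --become -m service -a "name=httpd state=restarted"

Because the inventory and templates live in a version-controlled repository, recovering a machine is just a matter of rerunning the same commands (or the playbook that wraps them) against the replacement host.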

In addition to being able to quickly provision new machines with your binary and configuration files, you are also able to create parallel environments, such as disaster recovery, test and development, and quality assurance. Using the same provisioning process for all of your environments assures consistent environments and early detection of potential production problems. Packages, binaries, and configuration files can be treated as data and stored in something similar to object stores, which are available in some form with all cloud solutions.

Data files
The final files to be backed up and restored are the data files. These files are the most important part of a backup and restore, and the hardest to replace. Part of the challenge is the volume of data as well as access to it. Data files are relatively easy to back up, the exception being files that are in transition, e.g., files being uploaded. Data file backups can be done with several tools, including synchronization tools or a full file backup solution. Another option is object stores, which are the natural repository for relatively static files and allow for a pay-as-you-go model.
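For the synchronization-tool approach, a minimal sketch (the paths and backup host are hypothetical) is a scheduled rsync push to another machine or location:

# Mirror /srv/data to a backup host; --backup-dir keeps a dated copy of anything
# that was changed or deleted, so accidental deletions can still be recovered
rsync -az --delete \
  --backup --backup-dir=/backups/changed/$(date +%F) \
  /srv/data/ backup01.example.com:/backups/data/current/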

Database content is a bit harder to back up. Even with instant snapshots on storage, backing up databases can be challenging. A snapshot at the storage level is an option, but it doesn’t allow for a partial database restore. A snapshot can also capture in-flight transactions that cause issues during a restore, which is why most database systems provide a mechanism for online backups. Those online backups should be used in combination with tools for file backups.
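With MySQL, for example, an online, consistent dump can be taken without stopping the database and then handed to the regular file-backup tooling (a sketch; the user and paths are placeholders):

# --single-transaction takes a consistent snapshot of InnoDB tables without blocking writes
mysqldump --single-transaction --routines --triggers --all-databases \
  -u backup_user -p > /backup/mysql/all-databases-$(date +%F).sql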

Something to remember about databases: many solutions end up accumulating data even after it is no longer used. An active database contains both the data currently being used and historical data. Having current and historical data in one place allows for analytics on the same database, but it also increases the size of the database, making database-related operations harder. It may make sense to archive older data in other databases or flat files, which keeps the database volumes manageable.
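If you take the archiving route, the idea is simply to copy rows older than a cutoff into an archive table (or export them to flat files) and remove them from the active table. A hypothetical sketch, with made-up database, table, and column names:

# Move orders older than a cutoff date into an archive table, then purge them
mysql -u admin -p shop_db -e "
  INSERT INTO orders_archive SELECT * FROM orders WHERE created_at < '2014-01-01';
  DELETE FROM orders WHERE created_at < '2014-01-01';"

In a real system you would batch the deletes or wrap the steps in a transaction to avoid long locks, but the principle is the same: keep the active database small and push history elsewhere.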


To recap, because the cloud provides rapid deployment of your operating system and convenient places to store data (such as object stores), it’s easy to factor it into your backup and recovery strategy. Following the containerization approach, split the content of your machines into binaries, configuration, and data. Focus on automating the deployment of binaries and configuration; it makes delivering an environment easier, including quality assurance, test, and disaster recovery. Finally, use traditional backup tools for backing up data files. Together, these make it possible to rapidly and repeatedly recover complete environments while controlling the amount of backed-up data that has to be managed.


¹ Snapshots are not available on bare metal servers that have no virtualization capability.

August 25, 2015

Free Resources for Your Startup

Building and running a startup is both difficult and expensive. From salary to servers to services, the demands on your budget are constant and come from all directions. On the Catalyst team we know this firsthand—our program was created as a way for startups to access SoftLayer's robust platform before they have revenue or funding.

After moving to Boulder, Colorado, in 2012, I joined my first startup there, which was a member of the Catalyst program. Without Catalyst, our organization would have been paying out of pocket for the bare metal servers we needed. Instead, that money was freed up for other essentials (like food to keep us alive).

Infrastructure isn't the only area in which startups can leverage free offerings. Since joining the Catalyst team one year ago, I've tracked and collected other free resources for startups. I compiled my research into a presentation that I've given at a few events. The presentation is available on SlideBean (a free online presentation platform, what else?) and is constantly being updated. Some highlights are below:

Big Company Programs
The Catalyst program is a model on how big companies can meaningfully engage with startups, and we're not the only ones doing it.

  • SVB: Silicon Valley Bank offers a program called Accelerator. Perks include free checking and financial mentorship. While saving on business checking won't make a big dent in your cash flow, the financial mentorship is top notch. The SVB team consists of banking experts who can offer advice on fundraising, financial instruments, and cash management.
  • SendGrid: Email deliverability is crucial for your company, so start with the best in the business. The free plan includes 10,000 emails per month, up from 200 emails per day when I first started giving this talk. Go to the pricing page and scroll down to the bottom for the free plan. (Full disclosure: SendGrid is a former partner.)
  • NASDAQ Exact Equity: I was recently at a VC conference, where I had two separate conversations about investors’ frustrations with disorganized or downright undocumented cap tables. The NASDAQ Exact Equity freemium tool will not only help you wrangle your cap table, but it will also signal success to the investor by showing that you’re thorough and organized.

Startup Freebies
I'm not going to cover the basics, such as Evernote, Trello, Asana, Pivotal Tracker, Launch Rock, Bootstrap, Google Drive, etc. You probably already know about these programs. Instead, I’ll share a few great ones you may not know about.

  • Docracy: If you need any sort of legal document, Docracy should be your first stop. The legal documents were prepared by lawyers and are available for free. The choices range from SaaS Terms & Conditions to founder agreements.
  • HTML5 UP: Need a quick, easy, and responsive template for your site? When WordPress is too much of a hassle for a splash page, head over to HTML5 UP for dozens of choices of free templates.
  • UI Kit: As you're moving from the free HTML5 UP template toward being able to build out your site with the free Bootstrap toolkit, save yourself coding time and get the UI Kit for free design elements such as lightbox, slider, accordions, and more.
  • SlideBean: I love SlideBean. While searching for "free PowerPoint templates," I discovered that all the templates were hideous. Then I stumbled across SlideBean and fell in love with it. It makes putting together a presentation quick and easy, and keeps it from looking like you traveled to 1999 to get your template.

Below are my favorite collections of resources for any freebies that I haven’t already covered.

  • Product Hunt List: The founder of CrazyEgg and KISSmetrics has an exhaustive list of free and freemium products for your startup.
  • Startup Stash: Over 400 resources are grouped by category. I especially love the design resources.
  • F6S: Not all of the deals are free; most come in the form of percentage discounts. But if you're going to pay for something, check F6S first for a discount.

And finally, the best piece of advice when trying to save money can be found in my last post: A Grandmother’s Advice for Startups: You never know ‘til you ask.

Have a free resource that you absolutely love that’s missing from my list? Email me at or tweet me @stoneybaby and let me know!


July 14, 2015

Preventative Maintenance and Backups

Has your cPanel server ever gone down only to not come back online because the disk failed?

At SoftLayer, data migration is in the hands of our customers. That means you must save your data and move it to a new server yourself. Well, thanks to a lot of slow weekends, I’ve had time to write a bash script that automates the process for you. It’s been tested in a dev environment of my own, working with the data center to simulate the dreaded DRS (data retention service) that follows a drive failure, and in a live environment to see what new curveballs could happen. In this three-part series, we’ll discuss how to do server preventative maintenance to prevent a total disaster, how to restore your backed-up data (if you have backups), and finally we’ll go over the script itself to fully automate a process to back up, move, and restore all of your cPanel data safely (if the prior two aren’t options for you).

Let’s start off with some preventative maintenance first and work on setting up backups in WHM itself.

The first thing you’ll need to do is log in to WHM, then go to Home >> Backup >> Backup Configuration. You will probably see an information box at the top that says “The legacy backups system is currently disabled”; that’s fine, let it stay disabled. The legacy backup system is going away soon anyway, and the newer system allows for more customization. If you haven’t clicked “Enable” under Global Settings, now would be the time to do so, so that the rest of the page becomes visible. You should now be able to modify the rest of the backup configuration, so let’s start with the backup type.

In my personal opinion, compressed is the only way to go. Yes, it takes longer, but uses less disk space in the end. Uncompressed uses up too much space, but it’s faster. Incremental is also not a good choice, as it only allows for one backup and it does not allow for users to include additional destinations.

The next section is scheduling and retention, and, personally, I like my backups done daily with a five-day retention plan. Yes it does use up a bit more space, but it’s also the safest because you’ll have backups from literally the day prior in case something happens.

The next section, Files, is where you will pick the users you want to back up, along with what type of data to include. I prefer to leave the default settings in this section and just choose the users I want to back up. It’s your server though, so you’re free to enable or disable the various options as you see fit. I would definitely leave the options for backing up system files checked, as that is highly recommended.

The next section deals with databases, and again, this one’s up to you. Per Account is your bare-minimum option and is still safe on its own. Entire MySQL Directory will simply back up the whole MySQL directory instead. The last option combines the two prior options, which to me is a bit overkill, as the Per Account Only option works well enough on its own.

Now let’s start the actual configuration of the backup service. From here, we’ll choose the backup directory as well as a few other options regarding retention and additional destinations. The best practice here is to have a drive specifically for backups: not just another partition or a folder, but a completely separate drive. Wherever you want the backups to reside, type that path in the box. I usually have a secondary drive mounted as /backup to put them in, so the pre-filled option works fine for me. The option for mounting the drive as needed should be enabled if you have a separate mount point that is not always mounted. As for the additional destinations, that’s up to you if you want to make backups of your backups. This lets you keep copies of the backups offsite somewhere else, just in case your server decides to divide by zero or hits some other random issue that takes everything down without being recoverable. Clicking the “Create New Destination” option will bring up a new section to fill in all the data relevant to the destination type you chose.

Once you’ve done all of this, simply click “Save Configuration.” Now you’re done!

But let’s say you’re ready to make a full backup right now instead of waiting for it to run automatically. For this, we’ll need to log in to the server via SSH and run a command. Using whatever SSH tool you prefer (PuTTY, in my case), connect to your server with the root username and password that you used to log in to WHM. From there, one simple command backs up everything: /usr/local/cpanel/bin/backup --force. This forces a full backup of every user you selected earlier when you configured the backup in WHM.
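Once the forced run completes, a quick sanity check (assuming the /backup destination configured earlier) is to list the destination and confirm that a fresh, date-stamped set of account archives is there:

# The backup destination should now contain a dated directory with one archive per user
ls -lh /backup/
du -sh /backup/*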

That’s pretty much it as far as preventative maintenance and backups go. Next time, we’ll go into how to restore all this content to a new drive in case something happens like someone accidentally deleting a database or a file that they really need back.


April 27, 2015

Good Documentation: A How-to Guide

As part of my job in Development Support, I write internal technical documentation for employee use only. My department is also the last line of support before a developer is called in for customer support issues, so we manage a lot of the troubleshooting documentation. Some of the documentation I write and use is designed for my own position; some of it is troubleshooting documentation for other roles within the company. I have a few guidelines that I use to improve the quality of my documentation. These are by no means definitive, but they’re helpful tips that I’ve picked up over the years.


I’m sure everyone has felt the frustration of reading a long-winded sentence that should have been three separate sentences. Keeping your sentences as short as possible helps ensure that your advice won’t go in one ear and out the other. If you can write things in a simpler way, you should do so. The goal of your documentation is to make your readers smarter.

Avoid phrasing things in a confusing way. A good example of this is how you employ parentheses. Sometimes it is necessary to use them to convey helpful tidbits to your readers. If you write something with parentheses in it and you can’t read it out loud without it sounding confusing, try to reword it or run it by someone else.

Good: It should have "limited connectivity" (the computer icon with the exclamation point) or "active" status (the green checkmark) and NOT "retired" (the red X).
Bad: It should have the icon “limited connectivity” (basically the computer icon with the exclamation point that appears in the list) (you can see the “limited connectivity” text if you hover over it) or “active” (the green checkmark) status and NOT the red “retired” X icon.

Ideally, you should use the same formatting for all of your documentation. At the very least, you should make your formatting consistent within a document. All of our transaction troubleshooting documentation at SoftLayer uses a standardized error format that is consistent and easy to read. Sometimes it might be necessary to break the convention when doing so improves readability, but very often it just makes things more difficult. For example, collapsible menus may look tidy, but they make it hard to search the entire page using Ctrl+F.

And finally, if people continually have a slew of questions, it’s probably time to revise your documentation and make it clearer. If it’s too complex, break it down into simpler terms. Add more examples to help clarify things so that it makes sense to your end reader.


Use bullet points or numbered lists when listing things instead of a paragraph block. I mention this because good formatting saves man-hours. There’s a difference between one person having to search a document for five minutes and 100 people having to search it for five minutes each; that’s over eight man-hours lost. Bullet points are much faster to skim when you are looking for something specific in the middle of a page. Avoid the “TL;DR” effect: don’t send your readers a wall of text.

Avoid superfluous information. If you have extra information beyond what is necessary, it can have an adverse effect on your readers. Your document may be the first your readers have read on your topic, so don’t overload them with too much information.

Don’t create duplicate information. If your documentation source is electronic, keep your documentation from repeating information, and just link to it in a central location. If you have the same information in five different places, you’ll have to update it in five different places if something changes.

Break up longer documents into smaller, logical sections. Organize your information first. Figure out headings and main points. If your page seems too long, try to break it down into smaller sections. For example, you might want to separate a troubleshooting section from the product information section. If your troubleshooting section grows too large, consider moving it to its own page.


Don’t make assumptions about what the users already know. If it wasn’t covered in your basic training when you were hired, consider adding it to the documentation. This is especially important when you are documenting things for your own job position. Don’t leave out important details just because you can remember them offhand. You’re doing yourself a favor as well. Six months from now, you may need to use your documentation and you may not remember those details.

Bad: SSH to the image server and delete the offending RGX folder.
Good: SSH to the image server (imageserver.mycompany.local), run ls -al /dev/rgx_files/ | grep blah to find the offending RGX folder, and then use rm -rf /dev/rgx_files/<folder> to delete it.

Make sure your documentation covers as much ground as possible. Cover every error and every possible scenario that you can think of. Collaborate with other people to identify any areas you may have missed.

Account for errors. Error messages often give very helpful information. The error might be as straightforward as “Error: You have entered an unsupported character: ‘$.’” Make sure to document the cause and fix for it in detail. If there are unsupported characters, it might be a good idea to provide a list of unsupported characters.

If something is confusing, provide a good example. It’s usually pretty easy to identify the pain points—the things you struggle with are probably going to be difficult for your readers as well. Sometimes things can be explained better in an example than they can in a lengthy paragraph. If you were documenting a command, it might be worthwhile to provide a good example first and then break it down and explain it in detail. Images can also be very helpful in getting your point across. In documenting user interfaces, an image can be a much better choice than words. Draw red boxes or arrows to guide the reader on the procedure.


April 24, 2015

Working Well With Your Employees

In the past 17 years, I’ve worked in a clean-room laboratory environment as an in-house tech support person managing Windows machines around dangerous lasers and chemicals, in the telecommunications industry as a systems analyst and software engineer, and in the hosting industry as a lead developer, software architect, and manager of development. In every case, the following guiding principles have served me well, both as an employee striving to learn more and be a better contributor and as a manager striving to be a worthy employer of rising talent. Whether you are a manager or a startup CEO, this advice will help you cultivate success for you and your employees.

Hire up.
When you’re starting out, you will likely wear many hats out of necessity, but as your company grows, these hats need to be given to others. Hire the best talent you can, and rely on their expertise. Don’t be intimidated by intelligence—embrace it and don’t let your ego stand in the way. Also, be aware that faulty assumptions about someone’s skill set can throw off deadlines and cause support issues down the road. Empowering people increases a sense of ownership and pride in one’s work.

Stay curious.
IBM has reinvented itself over and over. It has done this to keep up with the ever-changing industry with the help of curious employees. Curious people ask more questions, dig deeper, and they find creative solutions to current industry needs. Don’t pour cold water on your employees who want to do things differently. Listen to them with an open mind. Change is sometimes required, and it comes through innovation by curious people.

Integrate and automate everything.
Take a cue from SoftLayer: If you find yourself performing a repetitive task, automate and document it. We’ve focused on automation since day one. Not only do we automate server provisioning, but we’ve also automated our development build processes so that we can achieve repeatable success in code releases. Do your best to automate yourself out of a job and encourage others to live by this mantra. Don’t trade efficiency for job security—those who excel in this should be given more responsibility.

Peace of mind is worth a lot.
Once, a coworker and I applied to contract for a job internally because our company was about to spend millions farming it out to a third party. We knew we could do it faster and cheaper, but the company went with the third party instead. Losing that contract taught me that companies are willing to pay handsomely for peace of mind. If you can build a team that is the source of that peace of mind for your company, you will go far.

When things don’t go right.
Sometimes things go off the rails, and there’s nothing you can do about it. People make mistakes. Deadlines are missed. Contracts fall through. In these situations, it’s important to focus on where the process went wrong and put changes in place to keep it from happening again. This is more beneficial to your team than finger pointing. If you can learn from your mistakes, you will create an environment that is agile and successful.

- Jason

March 20, 2015

Startups: Always Be Hiring

In late 2014, I was at a Denver job fair promoting an event I was organizing, NewCo Boulder. All the usual suspects of the Colorado tech community were there: companies ranging in size from 50 to 500 employees. It's a challenge to stand out from the crowd when vying for the best talent in this competitive job market, so the companies had pop-up banners, posters, swag of every kind on the table, and swarms of team members clad in company t-shirts talking to everyone who walked by.

Nestled amid the dizzying display of logos was MediaNest, a three-person, pre-funding startup in the Catalyst program (at the time, they were in the Boomtown Boulder fall 2014 cohort). What the heck was a scrappy startup doing among the top Colorado tech companies? In a word: hiring.

MediaNest was there to hire for three roles: front end developer, back end developer, and sales representative. They were there to double the size of their team ... when they had the money. In the war for talent, they started early and were doing it right.

I've often heard VCs (venture capitalists) and highly successful startup CEOs say the primary roles for a startup CEO are to always keep money in the bank and butts in seats. Both take tremendous time and energy, and they go hand-in-hand. It takes months to close a funding round, and similarly, it takes months to fill roles with the right people. If you're just getting started with hiring once that money is in the bank, you're starting from a deficit, burning capital, and straining resources while you get the recruiting gears going.

The number one resource for startup hiring is personal networks. Start with your friends and acquaintances and let everyone know you're looking to fill specific roles, even as you're out raising the capital to pay them. As the round gets closer to closing, intensify your efforts and expand your reach.

But what happens if you find someone perfect before you’re ready to hire them? Julien Khaleghy, CEO of MediaNest, says, "It's a tricky question. We will tend to be generous on the equity portion and conservative on the salary portion. If a comfortable salary is a requirement for the person, we will lock them for our next round of funding."

MediaNest wasn’t funded when I saw them in Denver, and they weren’t ready to make offers, so why attend a job fair? Khaleghy adds, based on his experience as CEO, "It's actually a good thing to show a letter of intent to hire someone when you are raising money."

At that job fair in Denver, MediaNest, with its simple table and two of the co-founders present, was just as busy that day as the companies with a full complement of staff giving away every piece of imaginable swag. I recommend following their example and getting ahead of the hiring game.

As long as you're successful, you'll never stop hiring. So start today.


March 4, 2015

Docker: Containerization for Software

Before modern-day shipping, packing and transporting different shaped boxes and other oddly shaped items from ships to trucks to warehouses was difficult, inefficient, and cumbersome. That changed when the modern shipping container was introduced. Containers can easily be stacked and organized onto a cargo ship, then transferred to a truck and sent on to their final destination. Solomon Hykes, Docker founder and CTO, likens Docker to the modern-day shipping industry's solution for shipping goods: Docker uses containerization for shipping software.

Docker, an open platform for distributed applications used by developers and system administrators, leverages standard Linux container technologies and some git-inspired image management technology. Users can create containers that have everything they need to run an application, just like a virtual server, but are much lighter to deploy and manage. Each container has all the binaries it needs, including libraries and middleware, configuration, and an activation process. The containers can be moved around [like containers on ships] and executed on any Docker-enabled server.

Container images are built and maintained using deltas, which can be shared by several other images. Sharing reduces the overall size and allows for easy image storage in Docker registries [like containers on ships]. Any user with access to the registry can download the image and activate it on any server with a couple of commands. Some organizations have development teams that build the images, which are then run by their operations teams.
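In practice, "a couple of commands" really is all it takes. A hypothetical example, where the registry host and image name are placeholders:

docker pull registry.example.com:5000/myorg/webapp
docker run -d --name webapp -p 80:8080 registry.example.com:5000/myorg/webapp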

Docker & SoftLayer

The lightweight containers can be used on both virtual servers and bare metal servers, making Docker a nice fit with a SoftLayer offering. You get all the flexibility of a re-imaged server without the downtime. You can create red-black deployments, and mix hourly and monthly servers, both virtual and bare metal.

While many people share images on the public Docker registry, security-minded organizations will want to create a private registry by leveraging SoftLayer object storage. You can create Docker images for a private registry that will store all its information with object storage. Registries are then easy to create and move to new hosts or between data centers.

Creating a Private Docker Registry on SoftLayer

Use the following information to create a private registry that stores data with SoftLayer object storage. [All the commands below were executed on an Ubuntu 14.04 virtual server on SoftLayer.]

Optional setup step: Change Docker backend storage AuFS

Docker has several options for an image storage backend; the default is DeviceMapper. During testing, that option was not very stable, failing to start and to export images. (This step may not be necessary in your build, depending on updates to the operating system or Docker itself.) The solution was to move to Another Union File System (AuFS).
  1. Install the following package to enable AuFS:
    apt-get install linux-image-extra-3.13.0-36-generic
  2. Edit /etc/init/docker.conf, and add the storage-driver argument to the Docker daemon options (for example, --storage-driver=aufs).
  3. Restart Docker, and check if the backend was changed:
    service docker restart
    docker info

The command should indicate AuFS is being used. The output should look similar to the following:
Containers: 2
Images: 29
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Dirs: 33
Execution Driver: native-0.2
Kernel Version: 3.13.0-36-generic
WARNING: No swap limit support

Step 1: Create image repo

  1. Create the directory registry-os in a work directory.
  2. Create a file named Dockerfile in the registry-os directory. It should contain the following code:
    # start from a registry release known to work
    FROM registry:0.7.3
    # get the swift driver for the registry
    RUN pip install docker-registry-driver-swift==0.0.1
    # SoftLayer uses v1 auth and the sample config doesn't have an option 
    # for it so inject one
    RUN sed -i '91i\    swift_auth_version: _env:OS_AUTH_VERSION' /docker-registry/config/config_sample.yml
  3. Execute the following command from the directory that contains the registry-os directory to build the registry container:
    docker build -t registry-swift:0.7.3 registry-os

Step 2: Start it with your object storage credentials

The credentials and container on the object storage must be provided in order to start the registry image. The standard Docker way of doing this is to pass the credentials as environment variables.
docker run -it -d -e SETTINGS_FLAVOR=swift \
    -e OS_AUTH_URL='<object storage auth URL>' \
    -e OS_AUTH_VERSION=1 \
    -e OS_USERNAME='<API_USER>' \
    -e OS_PASSWORD='<API_KEY>' \
    -e OS_CONTAINER='docker' \
    -e GUNICORN_WORKERS=8 \
    -p 5000:5000 registry-swift:0.7.3

This example assumes we are storing images in DAL05 in a container called docker. API_USER and API_KEY are the object storage credentials you can obtain from the portal; substitute them, along with your account's object storage authentication URL, for the placeholders above.

Step 3: Push image

An image needs to be pushed to the registry to make sure everything works. The image push involves two steps: tagging an image and pushing it to the registry.
docker tag registry-swift:0.7.3 localhost:5000/registry-swift
docker push localhost:5000/registry-swift

You can ensure that it worked by inspecting the contents of the container in the object storage.
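One way to do that inspection (a sketch using the python-swiftclient command line with the same v1 credentials; the auth URL is a placeholder) is:

swift -A <object storage auth URL> -U <API_USER> -K <API_KEY> list docker

You should see the registry's objects listed inside the container named docker.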

Step 4: Get image

Once the image has been successfully pushed to object storage via the registry, it can be downloaded by issuing the following command:
docker pull localhost:5000/registry-swift
Images can be downloaded to other servers by replacing localhost with the IP address of the registry server.

Final Considerations

Once you have created your private registry, your Docker images can be pushed throughout your infrastructure. Failure of the machine that hosts the registry can be quickly mitigated by restarting the registry image on another node. Keeping the registry image on more than one node makes that restart immediate, while the image data itself lives in object storage, letting you leverage the SoftLayer platform and the high durability of object storage.

If you haven’t explored Docker, visit their site, and review the use cases.

