March 30, 2015

The Importance of Data's Physical Location in the Cloud

If top-tier cloud providers use similar network hardware in their data centers and connect to the same transit and peering bandwidth providers, how can SoftLayer claim to provide the best network performance in the cloud computing industry?

Over the years, I've heard variations of that question asked dozens of times, and it's fairly easy to answer with impressive facts and figures. All SoftLayer data centers and network points of presence (PoPs) are connected to our unique global network backbone, which carries public, private, and management traffic to and from servers. Using our network connectivity table, some back-of-the-envelope calculations reveal that we have more than 2,500Gbps of bandwidth connectivity with some of the largest transit and peering bandwidth providers in the world (and that total doesn't even include the private peering relationships we have with other providers in various regional markets). Additionally, customers may order servers with up to 10Gbps network ports in our data centers.

For the most part, those stats explain our differentiation, but part of the bigger network performance story is still missing, and to a certain extent it has been untold—until today.

The 2,500+Gbps of bandwidth connectivity we break out in the network connectivity table only accounts for the on-ramps and off-ramps of our network. Our global network backbone is actually made up of an additional 2,600+Gbps of bandwidth connectivity ... and all of that backbone connectivity transports SoftLayer-related traffic.

This robust network architecture streamlines the access to and delivery of data on SoftLayer servers. When you access a SoftLayer server, the network is designed to bring you onto our global backbone as quickly as possible at one of our network PoPs, and when you're on our global backbone, you'll experience fewer hops (and a more direct route that we control). When one of your users requests data from your SoftLayer server, that data travels across the global backbone to the nearest network PoP, where it is handed off to another provider to carry the data the "last mile."
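You can actually watch this handoff happen. A traceroute to a SoftLayer server typically shows a handful of hops from your ISP to the nearest PoP, and then SoftLayer-controlled hops the rest of the way. A quick sketch (the hostname is one of the public speed test servers mentioned below; any SoftLayer destination works):

traceroute speedtest.dal05.softlayer.com
# The first few hops belong to your ISP; once the route reaches a
# SoftLayer PoP, the remaining hops stay on the SoftLayer backbone.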

With this controlled environment, I decided to undertake an impromptu science experiment to demonstrate how location and physical distance affect network performance in the cloud.

Speed Testing on the SoftLayer Global Network Backbone

I work in the SoftLayer office in downtown Houston, Texas. In network-speak, this location is HOU04. You won't find that location on any data center or network tables because it's just an office, but it's connected to the same global backbone as our data centers and network points of presence. From my office, the "last mile" doesn't exist; when I access a SoftLayer server, my bits and bytes only travel across the SoftLayer network, so we're effectively cutting out a number of uncontrollable variables in the process of running network speed tests.

For better or worse, I didn't tell any network engineers that I planned to run speed tests to every available data center and share the results I found, so you're seeing exactly what I saw with no tomfoolery. I just fired up my browser, headed to our Data Centers page, and made my way down the list using the SpeedTest option for each facility. Customers often go through this process when trying to determine the latency, speeds, and network path that they can expect from servers in each data center, but if we look at the results collectively, we can learn a lot more about network performance in general.

With the results, we'll discuss how network speed tests work, what the results mean, and why some might be surprising. If you're feeling scientific and want to run the tests yourself, you're more than welcome to do so.
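If you'd like a quick, browser-free baseline first, the latency figure these tests report is essentially a ping time, so you can sample a few data centers from a terminal (assuming the public speedtest.<data center>.softlayer.com naming; adjust to the hostnames linked from the Data Centers page):

for dc in dal05 ams01 sng01; do
  echo "--- $dc ---"
  ping -c 5 speedtest.$dc.softlayer.com | tail -1   # keep the min/avg/max summary
done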

The Ookla SpeedTests we link to from the data centers table measured the latency (ping time), jitter (variation in latency), download speeds, and upload speeds between the user's computer and the data center's test server. To run this experiment, I connected my MacBook Pro via Ethernet to a 100Mbps wired connection. At the end of each speed test, I took a screenshot of the performance stats:

SoftLayer Network Speed Test

To save you the trouble of trying to read all of the stats on each data center as they cycle through that animated GIF, I also put them into a table:

Data Center   Latency (ms)   Download Speed (Mbps)   Upload Speed (Mbps)   Jitter (ms)
AMS01         121            77.69                   82.18                 1
DAL01         9              93.16                   87.43                 0
DAL05         7              93.16                   83.77                 0
DAL06         7              93.11                   83.50                 0
DAL07         8              93.08                   83.60                 0
DAL09         11             93.05                   82.54                 0
FRA02         128            78.11                   85.08                 0
HKG02         184            50.75                   78.93                 2
HOU02         2              93.12                   83.45                 1
LON02         114            77.41                   83.74                 2
MEL01         186            63.40                   78.73                 1
MEX01         27             92.32                   83.29                 1
MON01         52             89.65                   85.94                 3
PAR01         127            82.40                   83.38                 0
SJC01         44             90.43                   83.60                 1
SEA01         50             90.33                   83.23                 2
SNG01         195            40.35                   72.35                 1
SYD01         196            61.04                   75.82                 4
TOK02         135            75.63                   82.20                 2
TOR01         40             90.37                   82.90                 1
WDC01         43             89.68                   84.35                 0

By performing these speed tests on the SoftLayer network, we can actually learn a lot about how speed tests work and how physical location affects network performance. But before we get into that, let's take note of a few interesting results from the table above:

  • The lowest latency from my office is to the HOU02 (Houston, Texas) data center. That data center is about 14.2 miles away as the crow flies.
  • The highest latency results from my office are to the SYD01 (Sydney, Australia) and SNG01 (Singapore) data centers. Those data centers are at least 8,600 and 10,000 miles away, respectively.
  • The fastest download speed observed is 93.16Mbps, and that number was seen from two data centers: DAL01 and DAL05.
  • The slowest download speed observed is 40.35Mbps from SNG01.
  • The fastest upload speed observed is 87.43Mbps to DAL01.
  • The slowest upload speed observed is 72.35Mbps to SNG01.
  • The upload speeds observed are faster than the download speeds from every data center outside of North America.

Are you surprised that we didn't see any results closer to 100Mbps? Is our server in Singapore underperforming? Are servers outside of North America more selfish to receive data and stingy to give it back?

Those are great questions, and they actually jumpstart an explanation of how the network tests work and what they're telling us.

Maximum Download Speed on 100Mbps Connection

If my office is 2 milliseconds from the test server in HOU02, why is my download speed only 93.12Mbps? To answer this question, we need to understand that to perform these tests, a connection is made using Transmission Control Protocol (TCP) to move the data, and TCP does a lot of work in the background. The download is broken into a number of tiny chunks called packets and sent from the sender to the receiver. TCP wants to ensure that each packet that is sent is received, so the receiver sends an acknowledgement back to the sender to confirm that the packet arrived. If the sender is unable to verify that a given packet was successfully delivered to the receiver, the sender will resend the packet.

This system is pretty simple, but in actuality, it's very dynamic. TCP wants to be as efficient as possible: to send the fewest packets needed to get the entire message across. To accomplish this, TCP adjusts how much data it keeps in flight for each communication. The receiver dictates that amount by advertising a receive window; the window starts small, and the receiver analyzes and adjusts it to keep as much data in flight as possible without the connection becoming unstable. Some operating systems are better than others when it comes to tweaking and optimizing TCP transfer rates, but the work TCP does to ensure that packets are sent and received without error adds overhead, and that overhead limits the maximum speed we can achieve.

Understanding the SNG01 Results

Why did my SNG01 speed test max out at a meager 40.35Mbps on my 100Mbps connection? Well, now that we understand how TCP works behind the scenes, we can see why our download speeds from Singapore are lower than we'd expect. The latency between the sending and successful receipt of a packet plays into TCP's judgment of how stable the connection is. Higher ping times cause TCP to keep less data in flight than it would on a low-latency path, to ensure that no sizable chunk of data is lost (and has to be reproduced and resent).

With our global backbone optimizing the network path of the packets between Houston and Singapore, the more than 10,000-mile journey, the nature of TCP, and my computer's TCP receive window adjustments all factor into the download speeds recorded from SNG01. Looking at the results in the context of the distance the data has to travel, our results are actually well within the expected performance.
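To put rough numbers behind that: a TCP sender can have at most one receive window of unacknowledged data in flight per round trip, so throughput tops out near the window size divided by the round-trip time. A back-of-the-envelope sketch, assuming an effective window of about 1MB (an assumption; the real window depends on OS defaults and window scaling):

# Bandwidth-delay ceiling: throughput <= window / RTT
for entry in HOU02:2 DAL05:7 SNG01:195; do
  dc=${entry%%:*}; rtt=${entry##*:}
  awk -v dc="$dc" -v rtt="$rtt" 'BEGIN {
    window_bits = 1024 * 1024 * 8            # ~1MB window, in bits
    printf "%s: %.0f Mbps ceiling\n", dc, window_bits / (rtt / 1000) / 1e6
  }'
done
# HOU02: 4194 Mbps ceiling -> the 100Mbps port is the real bottleneck
# DAL05: 1198 Mbps ceiling -> still port-limited
# SNG01: 43 Mbps ceiling   -> latency itself caps throughput near the
#                             40.35Mbps we observed

With a 2ms round trip, the window never comes close to limiting a 100Mbps port; at 195ms, it dominates everything else.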

Because the default behavior of TCP is partially to blame for the results, we could actually tweak the test and tune our configurations to deliver faster speeds. To confirm that improvements can be made relatively easily, we can actually just look at the answer to our third question...

Upload > Download?

Why are the upload speeds faster than the download speeds after latency jumps from 50ms to 114ms? Every location in North America is within 2,000 miles of Houston, while the closest location outside of North America is about 5,000 miles away. With what we've learned about how TCP and physical distance play into download speeds, that jump in distance explains why the download speeds drop from 90.33Mbps to 77.41Mbps as soon as we cross an ocean, but how can the upload speeds to Europe (and even APAC) stay on par with their North American counterparts? The only difference between our download path and upload path is which side is sending and which side is receiving. And if the receiver determines the size of the TCP receive window, the most likely culprit in the discrepancy between download and upload speeds is TCP windowing.

A Linux server is built and optimized to be a server, whereas my Mac OS X laptop has a lot of other responsibilities, so it shouldn't come as a surprise that the default TCP receive window handling is better on the server side. With changes to the way my laptop handles TCP, download speeds would likely improve significantly. Additionally, if we wanted to push the envelope even further, we might consider using a different transfer protocol to take advantage of the consistent, controlled network environment.
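As a sketch of what that tweaking looks like on the laptop side (the sysctl keys below are the OS X ones circa 2015, and the values are purely illustrative; the right numbers depend on your link and workload):

sudo sysctl -w kern.ipc.maxsockbuf=4194304      # raise the socket buffer ceiling
sudo sysctl -w net.inet.tcp.recvspace=1048576   # larger default receive window
sudo sysctl -w net.inet.tcp.sendspace=1048576   # larger default send window

A larger receive window lets more data ride out the long round trip to a server like SNG01 before an acknowledgement is required.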

The Importance of Physical Location in Cloud Computing

These real-world test results, gathered under controlled conditions, demonstrate how strongly data's geographic proximity to its user shapes the user's perceived network performance. We know that the network latency in a 14-mile trip will be lower than the latency in a 10,000-mile trip, but we often don't think about the ripple effect latency has on other network performance indicators. And this experiment actually controls a lot of other variables that can exacerbate the performance impact of geographic distance. The tests were run on a 100Mbps connection because that's a pretty common maximum port speed, but if we ran the same tests on a GigE line, the difference would be even more dramatic. Proof: HOU02 @ 1Gbps v. SNG01 @ 1Gbps

Let's apply our experiment to a real-world example: Half of our site's user base is in Paris and the other half is in Singapore. If we chose to host our cloud infrastructure exclusively from Paris, our users would see dramatically different results. Users in Paris would have sub-10ms latency while users in Singapore have about 300ms of latency. Obviously, operating cloud servers in both markets would be the best way to ensure peak performance in both locations, but what if you can only afford to provision your cloud infrastructure in one location? Where would you choose to provision that infrastructure to provide a consistent user experience for your audience in both markets?

Given what we've learned, we should probably choose a location with roughly the same latency to both markets. We can use the SoftLayer Looking Glass to see that San Jose, California (SJC01) would be a logical midpoint ... At this second, the latency between SJC and PAR on the SoftLayer backbone is 149ms, and the latency between SJC and SNG is 162ms, so both would experience very similar performance (all else being equal). Our users in the two markets won't experience mind-blowing speeds, but neither will experience mind-numbing speeds either.

The network performance implications of physical distance apply to all cloud providers, but because of the SoftLayer global network backbone, we're able to control many of the variables that lead to higher (or inconsistent) latency to and from a given data center. The longer a single provider controls the route traffic takes, the more efficiently that traffic moves. You might see the same latency to another provider's cloud infrastructure from a given location at a given time across the public Internet, but you certainly won't see the same consistency from all locations at all times. SoftLayer has spent millions of dollars to build, maintain, and grow our global network backbone to transport public and private network traffic, and as a result, we feel pretty good about claiming to provide the best network performance in cloud computing.


March 27, 2015

Building “A Thing” at Hackster.io’s Hardware Weekend

Introduction to Hackster.io

Over the weekend in San Francisco, I attended a very cool hackathon put together by the good folks at Hackster.io. Hackster.io's Hardware Weekend is a series of hackathons all over the country designed to bring together people with a passion for building things, give them access to industry mentors, and see what fun and exciting things they come up with in two days. The registration desk was filled with all kinds of hardware modules to be used for whatever project you could dream up—from Intel Edison boards and the Grove Starter Kit to a few other gadgets I couldn't begin to identify, and of course, plenty of stickers.

After a delicious breakfast, we heard a variety of potential product pitches by the attendees, then everyone split off into groups to support their favorite ideas and turn them into a reality.

When not hard at work coding, soldering, or wiring up devices, the attendees heard talks from a variety of industry leaders, who shared their struggles and what worked for their products. One of the founders gave a great talk on how his company began and where it is today.

Building a thing!
After lunch, Phil Jackson, SoftLayer’s lead technology evangelist, gave an eloquent crash course in SoftLayer and how to get your new thing onto the Internet of Things. Phil and I have a long history in Web development, so we provided answers to many questions on that subject. But when it comes to hardware, we are fairly green. So when we weren't helping teams get into the cloud, we tried our hand at building something ourselves.

We started off with some of the hardware handouts: an Edison board and the Grove Starter Kit. We wanted to complete a working project in the same time the rest of the teams had—and show off some of the power of SoftLayer, too. Our idea was to read the Grove Kit's heat sensor, display the temperature on the LCD, and post the result to an IBM Cloudant database, which would then be displayed on a SoftLayer server as a live-updating graph.
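The Cloudant leg of that pipeline is refreshingly small. A hypothetical sketch of posting one reading over Cloudant's HTTP API (the account, credentials, database, and fields are all invented for illustration):

curl -X POST "https://ACCOUNT.cloudant.com/temperatures" \
     -u "ACCOUNT:PASSWORD" \
     -H "Content-Type: application/json" \
     -d '{"celsius": 22.5, "taken_at": "2015-03-21T10:00:00Z"}'
# Cloudant is CouchDB-compatible: POSTing a JSON document to a database
# creates a new record, which the graph page can then query.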

The first day consisted mostly of Googling variations on “Edison getting started,” “read Grove heat sensor,” “write to LCD”, etc. We started off simply, by trying to make an LED blink, which was pretty easy. Making the LED STOP blinking, however, was a bit more challenging. But we eventually figured out how to stop a program from running. We had a lot of trouble getting our project to work in Python, so we eventually admitted defeat and switched to writing node.js code, which was significantly easier (mostly because everything we needed was on stackoverflow).

After we got the general idea of how these little boards worked, our project came together very quickly at the end of Day 2—and not a moment too soon. The second I shouted, “IT WORKS!” it was time for presentations—and for us to give out the lot of Raspberry Pis we brought to some lucky winners.

And, without further ado, we present to you … the winners!


This team wanted to mod out Hackster's DeLorean time machine to prevent Biff (or anyone else) from taking it out for a spin. They used a variety of sensors to monitor the DeLorean for any unusual or unauthorized activity, and if all else failed, they were prepared to administer a deadly voltage through the steering wheel (represented by harmless LEDs in the demo) to stop an interloper from stealing the time machine. The team has a wonderful write-up of the sensors they used, along with the products used to bring everything together.

This was a very energetic team who we hope will use their new Raspberry Pis to keep the space-time continuum clear.


The KegTime project aimed to make us all more responsible drinkers by using an RFID reader to measure alcohol consumption and call Uber for you when you have had enough. They used a SoftLayer server to host all the drinking data, and used it to interact with Uber’s API to call a ride at the appropriate moment. Their demo included a working (and filled) keg with a pretty fancy LED-laden tap, which was very impressive. In recognition of their efforts to make us all more responsible drinkers, we awarded them five Raspberry Pis so they can continue to build cool projects to make the world a better place.

The Future of Hackster.io
Although this is the end of the event in San Francisco, there are many more events coming up in the near future. I will be going to Phoenix next on March 28 and look forward to all the new projects inventors come up with.

Be happy and keep hacking!


March 25, 2015

Introducing New Block Storage and File Storage

Everyone knows data growth is exploding. The chart below illustrates data growth—in zettabytes—over the last 11 years.

Storing all that data can get complicated. The rise of cloud computing and virtualization has led to myriad options for data storage. Kevin Trachier did a great job of defining and highlighting the differences in various cloud storage options in his blog post, Which storage solution is best for your project?

Today, I’m excited to announce that we’ve expanded SoftLayer’s cloud storage portfolio to include two new storage products: block storage and file storage, both featuring Performance and Endurance options. These storage offerings allow you to create storage volumes or shares and connect them to your bare metal or virtual servers using either NFS or iSCSI connectivity.
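As a rough sketch of what connecting looks like from a Linux host (the hostnames, share paths, and target addresses below are placeholders for the values shown in the customer portal):

# File storage: mount the share over NFS
mount -t nfs NFS_HOST:/SHARE_PATH /mnt/fileshare

# Block storage: discover and log in to the iSCSI target; the volume
# then appears as a local /dev/sdX disk
iscsiadm -m discovery -t sendtargets -p TARGET_IP
iscsiadm -m node --login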

The Endurance and Performance classes of both block storage and file storage feature:

  • Storage sizes to fit any application—from 20GB to 12TB
  • Highly available connectivity—redundant network connections reduce risk, protect against unplanned events, and provide business continuity
  • Allocated IOPS—meet any workload requirement through customizable levels of IOPS that are there when you need them
  • Durable and Resilient—infrastructure provides peace of mind against data loss without managing system-level RAID arrays
  • Concurrent Access—multiple hosts can simultaneously access both block and file volumes in support of advanced use cases such as clustered databases

The Endurance class of both block storage and file storage is available in three tiers, so you can choose the right balance of performance and cost for your needs:

  • 0.25 IOPS per GB is designed for workloads with low I/O intensity. Example applications include storing mailboxes or departmental level file shares.
  • 2 IOPS per GB is designed for most general purpose use. Example applications include hosting small databases backing Web applications or virtual machine disk images for a hypervisor.
  • 4 IOPS per GB is designed for higher intensity workloads. Example applications include transactional and other performance-sensitive databases.
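Because allocated IOPS scale linearly with capacity in the Endurance class, sizing is simple arithmetic. For a hypothetical 500GB volume:

for tier in 0.25 2 4; do
  awk -v t="$tier" 'BEGIN { printf "%s IOPS/GB x 500GB = %d IOPS\n", t, t * 500 }'
done
# 0.25 IOPS/GB x 500GB = 125 IOPS
# 2 IOPS/GB x 500GB = 1000 IOPS
# 4 IOPS/GB x 500GB = 2000 IOPS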

All Endurance tiers support snapshots and replication to remote data centers.

We designed the Performance class of both block storage and file storage to support high I/O applications like relational databases that require consistent levels of performance. Block volumes and file shares can be provisioned with up to 6,000 IOPS and 96MB/s of throughput.
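Those two Performance numbers are really one budget viewed two ways: dividing the throughput cap by the IOPS cap implies the rating assumes roughly 16KB per I/O operation (my inference, not a published spec):

awk 'BEGIN { printf "%.0f KB per I/O\n", 96e6 / 6000 / 1000 }'
# -> 16 KB per I/O; smaller operations hit the IOPS limit first,
#    larger ones hit the throughput limit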

Available sizes and IOPS combinations:

Block storage and file storage are available in SoftLayer data centers worldwide. SoftLayer customers can log in to the customer portal and start using them today.


March 23, 2015

Redefining the Startup Accelerator Business Model: An Interview with HIGHLINE’s Marcus Daniels

In this interview, SoftLayer’s community development lead in Canada, Qasim Virjee, sits down with Marcus Daniels, the co-founder and CEO of HIGHLINE, a venture-backed accelerator based in Vancouver and Toronto.

QV: Y Combinator has become an assumed standard for accelerators by creating its own business model. What do you think is both good and bad about this?

MD: Y Combinator (YC) not only created a new model for funding tech startups, but it also evolved the whole category. Historically, I like to think that Bill Gross's Idealab represented accelerator/incubator 1.0 and YC evolved that to 2.0 over the past decade, resulting in a hit parade of meaningful startups that are changing the world.

The good is that YC has created a “high quality” bar and led the standardization of micro-seed investment docs for the betterment of the whole startup ecosystem. It proved the model and has helped hundreds of amazing founders with venture profile businesses that are changing the world.

The bad is that there are now thousands of accelerators/incubators globally running generic programs that don't help founders much. More than half have a horrible track record of helping startups raise follow-on capital, and almost all have never had a single exit from a startup they invested in.

HIGHLINE has a strong track record in our short history and now sees a big opportunity to be amongst the leaders in the evolution of the accelerator industry.

QV: Many accelerators focus on streamlining a program to process cohorts of companies at regular intervals throughout the year, every year. Often, the high throughput these programs expect means they must select companies from applications, rather than the approach you seem to be taking. Can you explain how HIGHLINE is sourcing companies for investment?

MD: HIGHLINE gets over 800 applications a year and targets about 20–30 investments during that time. Of our last 12 investments, all came either from referral partners or from the team hunting the best founders to be part of our portfolio. Over the years, we have moved from the ideation stage, which comprises the majority of inbound applications, to the MVP-in-market stage, which is our sweet spot now. We also focus on low-volume, high-touch advisory support; spending a lot of time building relationships with founders and adding value to MVP-stage startups before investing helps us curate better deals.

QV: Traditionally, investment vehicles (such as VC firms and accelerator programs) have been run by financial industry types, but it seems that you are taking a more entrepreneurial approach with HIGHLINE and constantly evolving your business model. What can you tell me about this?

MD: The best accelerator leaders globally are past entrepreneurs who have some investment experience given how hands-on you have to be with the companies. Without the experience of starting and growing ventures, it is really hard to help tech founders navigate the daily challenges. Also, the best founders get to choose, and they want to work with other top founders in a long-term mentor/advisory/coaching relationship.

QV: How does being “VC-backed” differentiate HIGHLINE from other accelerators?

MD: Having several VCs as investors, such as the BDC and Relay Ventures, gives us an edge in several ways. Firstly, they are not only a great quality referral network for deals, but also a huge help in getting our companies venture-ready—even if they may not invest directly. Secondly, they allow us to internally focus on a specialization in helping venture profile businesses raise follow-on capital, as opposed to the glut of programs that are optimized for entrepreneurial education and lifestyle job creation. Lastly, they put big pressure on the whole HIGHLINE team to both get results for shareholders and build something unique that can be a category leader over the next decade.

QV: Our country is physically large and this seems to have created differentiated tech startup scenes between its cities. How does HIGHLINE collapse the geographic divide by having a physical presence in both Vancouver and Toronto?

MD: HIGHLINE tries to curate and unite the best digital founders, institutional investors, and ecosystem partners across Canada. We position our offices in both Vancouver and Toronto as portfolio hubs for founders who want to be headquartered in Canada, but want to take on the world. Most importantly, we spend time in all major Canadian startup ecosystems and have plans for unique events to bring our curated community closer together.

- Qasim

March 20, 2015

Startups: Always Be Hiring

In late 2014, I was at a Denver job fair promoting an event I was organizing, NewCo Boulder. All the usual suspects of the Colorado tech community were there: companies ranging in size from 50 to 500 employees. It's a challenge to stand out from the crowd when vying for the best talent in this competitive job market, so the companies had pop-up banners, posters, swag of every kind on the table, and swarms of teams clad in company t-shirts to talk to everyone who walked by.

Nestled amid the dizzying display of logos was MediaNest, a three-person, pre-funding startup in the Catalyst program; at the time, they were part of the Boomtown Boulder fall 2014 cohort. What the heck was a scrappy startup doing among the top Colorado tech companies? In a word: hiring.

MediaNest was there to hire for three roles: front end developer, back end developer, and sales representative. They were there to double the size of their team ... when they had the money. In the war for talent, they started early and were doing it right.

I've often heard VCs (venture capitalists) and highly successful startup CEOs say the primary roles for a startup CEO are to always keep money in the bank and butts in seats. Both take tremendous time and energy, and they go hand-in-hand. It takes months to close a funding round, and similarly, it takes months to fill roles with the right people. If you're just getting started with hiring once that money is in the bank, you're starting from a deficit, burning capital, and straining resources while you get the recruiting gears going.

The number one resource for startup hiring is personal networks. Start with your friends and acquaintances and let everyone know you're looking to fill specific roles, even as you're out raising the capital to pay them. As the round gets closer to closing, intensify your efforts and expand your reach.

But what happens if you find someone perfect before you’re ready to hire them? Julien Khaleghy, CEO of MediaNest, says, "It's a tricky question. We will tend to be generous on the equity portion and conservative on the salary portion. If a comfortable salary is a requirement for the person, we will lock them for our next round of funding."

MediaNest wasn’t funded when I saw them in Denver, and they weren’t ready to make offers, so why attend a job fair? Khaleghy adds, based on his experience as CEO, "It's actually a good thing to show a letter of intent to hire someone when you are raising money."

At that job fair in Denver, MediaNest, with its simple table and two of the co-founders present, was just as busy that day as the companies with a full complement of staff giving away every piece of imaginable swag. I recommend following their example and getting ahead of the hiring game.

As long as you're successful, you'll never stop hiring. So start today.


March 18, 2015

SoftLayer, Bluemix and OpenStack: A Powerful Combination

Building and deploying applications on SoftLayer with Bluemix, IBM’s Platform as a Service (PaaS), just got a whole lot more powerful. At IBM’s InterConnect, we announced a beta service for deploying OpenStack-based virtual servers within Bluemix. Obviously, the new service is exciting because it brings together the scalable, secure, high-performance infrastructure from SoftLayer with the open, standards-based cloud management platform of OpenStack. But making the new service available via Bluemix presents a unique set of opportunities.

Now Bluemix developers can deploy OpenStack-based virtual servers on SoftLayer or their own private OpenStack cloud in a consistent, developer-friendly manner. Without changing your code, your configuration, or your deployment method, you can launch your application to a local OpenStack cloud on your premises, a private OpenStack cloud you have deployed on SoftLayer bare metal servers, or to SoftLayer virtual servers within Bluemix. For instance, you could instantly fire up a few OpenStack-based virtual servers on SoftLayer to test out your new application. After you have impressed your clients and fully tested everything, you could deploy that application to a local OpenStack cloud in your own data center, all from within Bluemix. With Bluemix providing the ability to deploy applications across cloud deployment models, developers can create an infrastructure configuration once and deploy consistently, regardless of the stage of their application development life cycle.

OpenStack-based virtual servers on SoftLayer enable you to manage all of your virtual servers through standard OpenStack APIs and user interfaces, and to leverage the tooling, knowledge, and processes you or your organization have already built out. So the choice is yours: you may fully manage your virtual servers directly from within the Bluemix user interface or choose standard OpenStack interface options such as the Horizon management portal, the OpenStack API, or the OpenStack command line interface. For clients who are looking for enterprise-class infrastructure as a service but wish to avoid getting locked into a vendor's proprietary interface, our new OpenStack standard access provides a new choice.
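For example, once your OpenStack credentials are exported, the whole life cycle of a virtual server can be driven from the standard nova client (the image, flavor, and server names here are illustrative, not a prescribed catalog):

nova boot --image ubuntu-14.04 --flavor m1.small my-app-server   # create
nova list                                                        # verify it is ACTIVE
nova delete my-app-server                                        # tear down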

Providing OpenStack-based virtual servers is just one more (albeit major) step toward our goal of deeper OpenStack integration with SoftLayer services. For clients looking for enterprise-class Infrastructure as a Service (IaaS) that is available globally and accessible via standard OpenStack interfaces, OpenStack-based virtual servers on SoftLayer provide just what they are looking for.

The beta is open now for you to test deploying and running servers on the new SoftLayer OpenStack public cloud service through Bluemix. You can sign up for a Bluemix 30-day free trial.

- @marcalanjones

March 12, 2015

Sydney’s a Go

Transforming an empty room into a fully operational data center in just three months: Some said it couldn’t be done, but we did it. In less than three months, actually.

Placing a small team on-site and turning an empty room into a data center is what SoftLayer refers to as a Go Live. Of course, there is more to bringing a data center online than just the transformation of an empty room. In the months leading up to the Go Live deployment, there are details to work out, contracts to sign, and the electrical fit out (EFO) of the room itself to complete.

During my time with SoftLayer, I have been involved in building several of our data centers, or SoftLayer pods as we call them. Pods are designed to facilitate infrastructure scalability, and although they have evolved over the years as newer, faster equipment has become available, the original principles behind the design are still intact—so much so that a data center technician could travel to any SoftLayer data center in the world and start working without missing a beat. The same holds true for building a pod from the ground up. This uniformity is what allows us to fast-track the build-out of a new SoftLayer pod, and it's one of the reasons why the Sydney data center launch was such a success.

Rewind Three Months

When we landed in Sydney on December 11, 2014, we had an empty server room and about 125 pallets of gear and equipment that had been carefully packed and shipped by our inventory and logistics team. First order of business: breaking down the pallets, inspecting the equipment for any signs of damage, and checking that we received everything needed for the build. It’s really quite impressive to know that everything from screwdrivers to our 25U routers to even earplugs had been logged and accounted for. When you are more than 8,500 miles away from your base of operations, it’s imperative that the Go Live team has everything it needs on hand from the start. Something as seemingly inconsequential as not having the proper screws can lead to costly delays during the build. Once everything’s been checked off, the real fun begins.

(From Left) Jackie Vong, Dennis Vollmer, Jon Bowden, Chris Stelly, Antonio Gomez, Harpal Singh, Kneeling - Zachary Schacht, Peter Panagopoulos, and Marcelo Alba

Next we set up the internal equipment that powers the pod: four rows of equipment that encompass everything from networking gear to storage to the servers that run various internal systems. Racking the internal equipment is done according to pre-planned layouts and involves far too many cage nuts, the bane of every server build technician’s existence.

Once the internal rows are completed, it’s time to start focusing on the customer rows that will contain bare metal and virtual servers. Each customer rack contains a minimum of five switches—two for the private network, two for the public network, and one out-of-band management switch. Each row has two power strips and in the case of the Sydney data center, two electrical transfer switches at the bottom of the rack that provide true power redundancy by facilitating the transfer of power from one independent feed to another in the case of an outage. Network cables from the customer racks route back to the aggregate switch rack located at the center of each row.

Right around the time we start to wrap up the internal and customer rows, a team of network engineers arrives on-site to run the interconnects between the networking gear and the rest of the internal systems and to light up the fiber lines connecting our new pod to our internal network (as well as the rest of the world). This is a big day: not only do we finally get Wi-Fi up in the pod, but we're no longer isolated on an island. We are connected, and teams thousands of miles away can begin the process of remotely logging in to configure, deploy, and test systems. The networking team will start work on configuring the switches, load balancers, and firewalls for their specific purposes. The storage team will begin the process of bringing massive storage arrays online, and information systems will start work on deploying the systems that manage the automation each pod provides.

(From Left) Zach Robbins, Grayson Schmidt, Igor Gorbatok and Alex Abin

During this time, we start the process of onboarding the newest members of the team, the local Sydney techs, who in a few short months will be responsible for managing the data center independently. But before they fully take over, customer racks are prepped and are waiting to house the final piece of the puzzle: the servers. They arrive via truck day [check out DAL05 Pod 2 truck day]; Sydney’s was around the beginning of February. Given the amount of hardware we typically receive, truck days are an event unto themselves—more than 1,500 of the newest and fastest SuperMicro servers of various shapes and sizes that will serve as the bare metal and virtual servers for our customers. Through a combination of manpower and automation, these servers get unboxed, racked, checked in, and tested before they are sold to our customers.

Now departments involved in bringing the Sydney data center online wrap up and sign off. Then we go live.

Bringing a SoftLayer pod online and on time is a beautifully choreographed process and is one of my greatest professional accomplishments. The level of coordination and cohesion required to pull it off, not once, not twice, but ten times all over the world in the last year alone, can't be overstated.


March 6, 2015

The SLayer Standard Vol. 1 No. 7: the IBM InterConnect Edition

Last week, an estimated 21,000 IBMers, SLayers, customers and partners from around the world flooded Las Vegas, Nev. to attend the first-ever IBM InterConnect. This new conference combined three popular IBM conferences (Impact, Innovate and Pulse) into a single, premier cloud and mobile techno-topia.

What our engineers and developers did in Las Vegas after conference hours might have stayed in Las Vegas, but IBM’s InterConnect hits and announcements didn’t. Here’s a recap:

Speed to Market Wins the Cloud Computing Race
Everyone likes to go fast, and the new senior vice president for IBM Cloud, Robert LeBlanc, likes to go super-fast. “What I’m focusing on is speed,” LeBlanc says.

In this blink-and-the-market-changes world, time-to-market determines the winners and losers in cloud computing. Part of LeBlanc’s strategy is opening new SoftLayer datacenters. If you haven’t heard the news, SoftLayer will be launching Sydney and Montreal data centers in the next 30 days — with more coming soon. Stay tuned for more locations.

Read more on how LeBlanc plans to win the cloud business race.

Cloudy skies on the horizon—that’s a good thing!
Our CEO, Ginni Rometty, announced a $4 billion investment in cloud services (shared with the data analytics and mobile businesses). She’s hoping that the investment will spur $40 billion a year in revenue come 2018.

Signs of the investment could be seen as execs at InterConnect announced new hybrid services coming in 2015, including enterprise containers. [What’s a container? Read our blog post.]

In fact, hybrid was a big theme at InterConnect. “We are going to make all those clouds act like one,” says Angel Diaz, vice president of IBM cloud technologies. IBM cloud (powered by SoftLayer) will be a one-stop shop: a cloud superstore with a smorgasbord of aaS offerings.

It looks like it’ll be an exciting ride for IBM over the next couple of years. Make sure to keep up with the headlines for more announcements in the coming months.


March 4, 2015

Docker: Containerization for Software

Before modern-day shipping, packing and transporting different shaped boxes and other oddly shaped items from ships to trucks to warehouses was difficult, inefficient, and cumbersome. That changed when the modern shipping container was introduced to the industry. These containers could easily be stacked and organized onto a cargo ship, then easily transferred to a truck that would carry them on to their final destination. Solomon Hykes, Docker founder and CTO, likens Docker to the modern-day shipping industry's solution for shipping goods: Docker applies containerization to shipping software.

Docker, an open platform for distributed applications used by developers and system administrators, leverages standard Linux container technologies and some git-inspired image management technology. Users can create containers that have everything they need to run an application, just like a virtual server, but that are much lighter to deploy and manage. Each container has all the binaries it needs, including libraries and middleware, configuration, and the activation process. The containers can be moved around [like containers on ships] and executed on any Docker-enabled server.

Container images are built and maintained using deltas, so layers can be shared by several images. Sharing reduces the overall size and allows for easy image storage in Docker registries. Any user with access to the registry can download the image and activate it on any server with a couple of commands. Some organizations have development teams that build the images, which are then run by their operations teams.
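You can see that delta structure on any image; each line in the history is a layer that other images can share:

docker history ubuntu:14.04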

Docker & SoftLayer

The lightweight containers can be used on both virtual servers and bare metal servers, making Docker a nice fit with a SoftLayer offering. You get all the flexibility of a re-imaged server without the downtime. You can create red-black deployments, and mix hourly and monthly servers, both virtual and bare metal.

While many people share images on the public Docker registry, security-minded organizations will want to create a private registry by leveraging SoftLayer object storage. You can create Docker images for a private registry that will store all its information with object storage. Registries are then easy to create and move to new hosts or between data centers.

Creating a Private Docker Registry on SoftLayer

Use the following information to create a private registry that stores data with SoftLayer object storage. [All the commands below were executed on an Ubuntu 14.04 virtual server on SoftLayer.]

Optional setup step: Change the Docker storage backend to AuFS

Docker has several options for its image storage backend. The default backend is DeviceMapper, which was not very stable during our testing, failing to start and export images. This step may not be necessary in your specific build, depending on updates to the operating system or Docker itself. Our solution was to move to Another Union File System (AuFS).
  1. Install the following package to enable AuFS:
    apt-get install linux-image-extra-3.13.0-36-generic
  2. Edit /etc/init/docker.conf, and add the storage driver argument to the daemon options. (The line below is one common form; the exact placement depends on your init configuration, but any setup that starts the daemon with --storage-driver=aufs works.)
    DOCKER_OPTS="--storage-driver=aufs"
  3. Restart Docker, and check if the backend was changed:
    service docker restart
    docker info

The command should indicate AuFS is being used. The output should look similar to the following:
Containers: 2
Images: 29
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Dirs: 33
Execution Driver: native-0.2
Kernel Version: 3.13.0-36-generic
WARNING: No swap limit support

Step 1: Create image repo

  1. Create the directory registry-os in a work directory.
  2. Create a file named Dockerfile in the registry-os directory. It should contain the following code:
    # start from a registry release known to work
    FROM registry:0.7.3
    # get the swift driver for the registry
    RUN pip install docker-registry-driver-swift==0.0.1
    # SoftLayer uses v1 auth and the sample config doesn't have an option 
    # for it so inject one
    RUN sed -i '91i\    swift_auth_version: _env:OS_AUTH_VERSION' /docker-registry/config/config_sample.yml
  3. Execute the following command from the directory that contains the registry-os directory to build the registry container:
    docker build -t registry-swift:0.7.3 registry-os

Step 2: Start it with your object storage credential

The credentials and container on the object storage must be provided in order to start the registry image. The standard Docker way of doing this is to pass the credentials as environment variables.
docker run -it -d -e SETTINGS_FLAVOR=swift \
    -e OS_AUTH_URL='https://dal05.objectstorage.softlayer.net/auth/v1.0' \
    -e OS_AUTH_VERSION=1 \
    -e OS_USERNAME='API_USER' \
    -e OS_PASSWORD='API_KEY' \
    -e OS_CONTAINER='docker' \
    -e GUNICORN_WORKERS=8 \
    -p 5000:5000 \
    registry-swift:0.7.3

This example assumes we are storing images in DAL05 on a container called docker. API_USER and API_KEY are the object storage credentials you can obtain from the portal.

Step 3: Push image

An image needs to be pushed to the registry to make sure everything works. The image push involves two steps: tagging an image and pushing it to the registry.
docker tag registry-swift:0.7.3 localhost:5000/registry-swift
docker push localhost:5000/registry-swift

You can ensure that it worked by inspecting the contents of the container in the object storage.
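One way to do that inspection is with the standard Swift command-line client, pointed at the same credentials and DAL05 endpoint the registry was started with:

swift -A https://dal05.objectstorage.softlayer.net/auth/v1.0 \
      -U API_USER -K API_KEY list docker
# Lists the objects the registry wrote into the 'docker' container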

Step 4: Get image

Once an image has been successfully pushed to object storage via the registry, it can be downloaded by issuing the following command:
docker pull localhost:5000/registry-swift
Images can be downloaded from other servers by replacing localhost with the IP address of the registry server.

Final Considerations

The Docker container can be pushed throughout your infrastructure once you have created your private registry. Because all of the registry's data lives in object storage, failure of the machine that hosts the registry can be quickly mitigated by restarting the image on another node. Keeping the registry image available on more than one node lets you leverage the SoftLayer platform and the high durability of object storage.

If you haven’t explored Docker, visit their site, and review the use cases.


February 25, 2015

To Raise Capital You Need a Startup Roadshow

In the world of big finance, before a company IPOs, the CEO, along with investment bankers, goes on a global roadshow to pitch the business to potential investors, including hedge funds, major investment funds, and other portfolio managers. The purpose is simple: drum up sales of the forthcoming stock issue. In the startup world, there are no big investment banks scheduling meetings. However, there are opportunities to do a roadshow for your startup, and for a startup it's even more important than it is for an IPO.

There were 275 IPOs in 2014, the largest number since 2000. By contrast, there are around 500,000 new businesses founded in the U.S. each year (not all of which are tech startups), approximately 225,000 angel investors in the U.S., and as of a year ago, there were 874 venture capital firms. In big finance, a few companies compete for the attention of a small, accessible group of investors. In the startup world, a large number of companies must seek capital from a huge pool of often-hard-to-find, geographically dispersed investors. Because of this, a roadshow is even more important for startups than it is for IPOs.

The SoftLayer Catalyst team works with startups in communities as big as San Francisco and Silicon Valley and as small as Cedar Rapids, Iowa. The number one thing entrepreneurs outside of the major financing hubs ask about is how to access capital. My response is always the same: Your job isn't to bring more capital to your local community; it's to build a great company. You know where the capital is, so build something worth investing in, and then do a roadshow.

Practice Locally

Thankfully, as the startup world grows & matures, the number of outlets for pitching increases every month. There are opportunities in most cities to stand up and pitch your idea to your peers or investors. Start by getting out in front of your local community as often as possible. In the Boulder/Denver community, there are a few companies that I see pitch all the time, and those companies have fantastic pitches because they are constantly practicing, getting feedback, and refining.

Look for meetups that focus on pitching such as 1 Million Cups and House of Genius, or simply do a search for startup pitch meetup in your city. During startup weeks or similar events, search and sign up for pitch practices and competitions. If your co-working space is like SoftLayer partner Galvanize, they might have a big member pitch competition or a peer-to-peer practice event. Participate in as many local and regional pitch competitions as you can find. As long as the competitions don't take a piece of equity or require a significant payment to participate—either of which should be very carefully evaluated beforehand—sign up, and compete. This constant exposure to your local market will help spread the word about your company, provide feedback on your pitch, and maybe even score some prizes!

For more advice on your pitch, read my previous post, Advice from the Catalyst Team: Pitching Like George Lucas.

Maximizing Your Startup Roadshow

Now that you've refined your pitch and practiced in front of as many local audiences as possible, it's time to start planning your roadshow. Traveling on a limited budget means you must plan a highly focused trip with a specific goal in mind. Maybe you're traveling from New York City to Philadelphia for a competition, or from Portland to San Francisco for an investor meeting; no matter the reason, it's imperative to maximize your trip. A good roadshow involves getting the absolute most out of your travel budget, and this means booking meetings with potential investors or customers.

For example, while attending StartSLC, I visited with a friend from Colorado, Ryan Angilly from Ramen. Angilly traveled to Salt Lake City to participate in the pitch competition, but he made the most out of his trip by filling his calendar with investor meetings throughout the week. Before his trip, he reached out to his contacts in the startup community in Utah and asked for introductions. After following through with the contacts, he met with investors he would have otherwise never met.

Start by either allocating a budget for travel or identifying the most important pitch competitions in your region or industry. Once you have your trip scheduled, immediately start looking for connections within your network. It's far more effective to say, "I'll be in town the 12th to the 14th; what does your schedule look like?" than a non-specific request such as, “When are you available?” Look for connections with ties to your local community as they are more likely to be helpful and make intros on your behalf. And ask around locally about who has ties to your destination. Get your meetings lined up, and get ready for a whirlwind of pitches on your first ever startup roadshow.

I'll leave you with this final point: In 2014, venture capital firms raised nearly $33 billion, a 62 percent increase over 2013 levels. They'll spend the next few years investing that money in startups. The money is out there, and you need to do a roadshow to find it.


