Business Posts

August 31, 2015

Data Ingestion and Access Using Object Storage

The massive growth in unstructured data (documents, images, videos, and so on) is one of the greatest problems facing today’s IT personnel. The challenge is storing all of that data on a platform that can grow as fast as the data itself. Object storage is an ideal, cost-effective, scale-out solution for storing extensive amounts of unstructured data.

SoftLayer offers object storage based on the OpenStack Swift platform. Object storage provides a fully distributed, scalable, API-accessible storage platform that can be integrated directly into applications. It can be used for storing static data, such as virtual machine (VM) images, photos, emails, and so on.

There are two important use cases when working with object storage: data ingestion and data access.

Data ingestion use case
A large medical research company needs to upload a large amount of data into their SoftLayer compute instance. The requirement is for a multi-hundred terabyte image repository that contains hundreds of millions of images. Researchers will then upload code to run on bare metal servers with GPUs to process the images in the repository. The images range from 512KB CT images to 30MB to 50MB mammograms and are logically grouped into 12 million “studies.” The client wants to onboard the data as quickly as possible.


  • Evenly distribute the objects into approximately 1,000 containers for the initial upload. For the number of objects the client needs to store, our tests have shown that having a much larger number of containers, or too few objects per container, would incur significant performance penalties. The proposed 1,000 containers strike a good balance between parallelism in object creation and manageable container sizes.
  • Concurrently add new objects to all containers using 400 worker threads for small objects (e.g., 512KB CT images) and 40 worker threads for large objects (e.g., 30MB to 50MB mammograms). The ideal number of worker threads depends on the workload size. Using too few threads yields better per-request response times but lower overall throughput. Using significantly more threads can degrade both latency and throughput because the threads start competing for resources.
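To make the approach concrete, here is a minimal Python sketch of the upload strategy described above. The container names, the hashing scheme, and the stubbed-out PUT callback are illustrative assumptions; in practice the PUT would be issued with a Swift client such as python-swiftclient.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

NUM_CONTAINERS = 1000        # per the sizing discussed above
SMALL_OBJECT_WORKERS = 400   # e.g., 512KB CT images
LARGE_OBJECT_WORKERS = 40    # e.g., 30MB to 50MB mammograms

def container_for(object_name, num_containers=NUM_CONTAINERS):
    """Deterministically spread objects across containers by hashing the name."""
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()
    return "images-%04d" % (int(digest, 16) % num_containers)

def upload_all(object_names, put_object, workers):
    """Upload concurrently; put_object(container, name) performs the actual
    PUT (for example, via python-swiftclient's Connection.put_object)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(put_object, container_for(name), name)
                   for name in object_names]
        for f in futures:
            f.result()  # surface any upload errors
```

Hashing the object name keeps the distribution even and deterministic, so a re-run after a failed upload lands each object in the same container.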

Data access use case
A large technology company has a mix of GET, PUT, and DELETE operations for which it needs object storage capable of holding billions of small objects (15KB or less). They also want consistent latencies for their operation mix (GET 54%, PUT 33%, and DELETE 13%), which requires optimal tuning for consistent performance. The client’s benchmarking calls for 1,400 operations per second.


  • Use multiple containers (at least 40) to improve the latency for PUT and DELETE operations. As long as the objects were distributed over at least 40 containers with a sufficient number of worker threads, the average latencies for PUT and DELETE operations were well below 100ms in our tests. There may be occasional latency spikes, which are not surprising on shared storage systems, but overall, the latencies should be relatively consistent.
    • The read latency for a GET is very fast—less than 20ms on average for small objects.
  • Use multiple containers if very high throughput is needed. In our tests, we could drive more than 6,000 transactions per second on the production cluster with at least 40 containers.
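As a rough illustration of how a benchmark like the client's can be driven, the sketch below generates the weighted operation mix and records per-operation latencies. The container naming scheme and the do_op callback are hypothetical stand-ins for real Swift requests.

```python
import random
from collections import defaultdict

# Operation mix from the client's benchmark: GET 54%, PUT 33%, DELETE 13%.
OPS = ["GET", "PUT", "DELETE"]
WEIGHTS = [0.54, 0.33, 0.13]
NUM_CONTAINERS = 40  # the minimum container count suggested above

def run_mix(num_ops, do_op, rng=random):
    """Drive a weighted mix of operations and collect per-op latencies.
    do_op(op, container) performs the request and returns its latency in ms."""
    latencies = defaultdict(list)
    for i in range(num_ops):
        op = rng.choices(OPS, weights=WEIGHTS)[0]
        container = "bench-%02d" % (i % NUM_CONTAINERS)  # round-robin spread
        latencies[op].append(do_op(op, container))
    return {op: sum(v) / len(v) for op, v in latencies.items() if v}
```

Tracking averages per operation type, rather than one blended number, is what reveals whether PUT and DELETE latencies stay below the 100ms mark while GETs remain fast.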

-Naeem Altaf & Khoa Huynh

June 17, 2015

Through Our Customers’ Eyes

There’s something unique about getting an opinion about a product or service from someone who has actually used that service—it’s part of the reason why the reviews on Amazon and apps like Yelp have become so popular.

We can tell you all day long about all the things the SoftLayer cloud platform is capable of, but wouldn’t it be nice to get real-life accounts from real customers who are building real businesses by using it?

The new customer stories page on our website features video and written stories of just that—happy customers who wanted to share their experiences about changing their industries or improving the way they do business by using SoftLayer.

And some of our customers are doing some really, really cool things. Take Sohonet, for example. The company is using the SoftLayer cloud to improve processes in the movie industry. Its private network for processing, storing, and collaborating on media workloads in the cloud has set a new standard for production and post-production work in the media industry.

We have many more SoftLayer customers who are also doing cool things. You can read their stories on our new customer stories page.

We think we have some of the most innovative customers in the cloud. If you’re thinking about becoming one of them, take a look around. Then sign up, and maybe you can be our next featured story.


April 27, 2015

Good Documentation: A How-to Guide

As part of my job in Development Support, I write internal technical documentation for employee use only. My department is also the last line of support before a developer is called in for customer support issues, so we manage a lot of the troubleshooting documentation. Some of the documentation I write and use is designed for internal use for my position, but some of it is troubleshooting documents for other job positions within the company. I have a few guidelines that I use to improve the quality of my documentation. These are by no means definitive, but they’re some helpful tips that I’ve picked up over the years.


I’m sure everyone has met the frustration of reading a long-winded sentence that should have been three separate sentences. Keeping your sentences as short as possible helps ensure that your advice won’t go in one ear and out the other. If you can write things in a simpler way, you should do so. The goal of your documentation is to make your readers smarter.

Avoid phrasing things in a confusing way. A good example of this is how you employ parentheses. Sometimes it is necessary to use them to convey useful asides to your readers. If you write something with parentheses in it and you can’t read it out loud without it sounding confusing, try to reword it or run it by someone else.

Good: It should have "limited connectivity" (the computer icon with the exclamation point) or "active" status (the green checkmark) and NOT "retired" (the red X).
Bad: It should have the icon “limited connectivity” (basically the computer icon with the exclamation point that appears in the list) (you can see the “limited connectivity” text if you hover over it) or “active” (the green checkmark) status and NOT the red “retired” X icon.

Ideally, you should use the same formatting for all of your documentation. At the very least, you should make your formatting consistent within your document. All of our transaction troubleshooting documentation at SoftLayer uses a standardized error formatting that is consistent and easy to read. Sometimes it might be necessary to break a convention when doing so improves readability, but weigh the trade-offs. For example, collapsible menus can tidy up a long page, but they make it hard to search the entire page using Ctrl+F, so very often they make things more difficult.

And finally, if people continually have a slew of questions, it’s probably time to revise your documentation and make it clearer. If it’s too complex, break it down into simpler terms. Add more examples to help clarify things so that it makes sense to your end reader.


Use bullet points or numbered lists when listing things instead of a paragraph block. I mention this because good formatting saves man-hours. There’s a difference between one person having to search a document for five minutes, versus 100 people having to search a document for five minutes each. That’s over eight man-hours lost. Bullet points are much faster to skim through when you are looking for something specific in the middle of a page somewhere. Avoid the “TL;DR” effect and don’t send your readers a wall of text.

Avoid superfluous information. If you have extra information beyond what is necessary, it can have an adverse effect on your readers. Your document may be the first your readers have read on your topic, so don’t overload them with too much information.

Don’t create duplicate information. If your documentation source is electronic, keep each piece of information in one central location and link to it rather than repeating it. If you have the same information in five different places, you’ll have to update it in five different places when something changes.

Break up longer documents into smaller, logical sections. Organize your information first. Figure out headings and main points. If your page seems too long, try to break it down into smaller sections. For example, you might want to separate a troubleshooting section from the product information section. If your troubleshooting section grows too large, consider moving it to its own page.


Don’t make assumptions about what the users already know. If it wasn’t covered in your basic training when you were hired, consider adding it to the documentation. This is especially important when you are documenting things for your own job position. Don’t leave out important details just because you can remember them offhand. You’re doing yourself a favor as well. Six months from now, you may need to use your documentation and you may not remember those details.

Bad: SSH to the image server and delete the offending RGX folder.
Good: SSH to the image server (imageserver.mycompany.local), and run ls -al /dev/rgx_files/ | grep blah to find the offending RGX folder and then use rm -rf /dev/rgx_files/<folder> to delete it.

Make sure your documentation covers as much ground as possible. Cover every error and every possible scenario that you can think of. Collaborate with other people to identify any areas you may have missed.

Account for errors. Error messages often give very helpful information. The error might be as straightforward as “Error: You have entered an unsupported character: ‘$.’” Make sure to document the cause and fix for it in detail. If there are unsupported characters, it might be a good idea to provide a list of unsupported characters.

If something is confusing, provide a good example. It’s usually pretty easy to identify the pain points—the things you struggle with are probably going to be difficult for your readers as well. Sometimes things can be explained better in an example than they can in a lengthy paragraph. If you were documenting a command, it might be worthwhile to provide a good example first and then break it down and explain it in detail. Images can also be very helpful in getting your point across. In documenting user interfaces, an image can be a much better choice than words. Draw red boxes or arrows to guide the reader on the procedure.


April 24, 2015

Working Well With Your Employees

In the past 17 years I’ve worked in a clean-room laboratory environment as an in-house tech support person managing Windows machines around dangerous lasers and chemicals, in the telecommunications industry as a systems analyst and software engineer, and in the hosting industry as a lead developer, software architect, and manager of development. In every case, the following guiding principles have served me well, both as an employee striving to learn more and be a better contributor and as a manager striving to be a worthy employer of rising talent. Whether you are a manager or a startup CEO, this advice will help you cultivate success for you and your employees.

Hire up.
When you’re starting out, you will likely wear many hats out of necessity, but as your company grows, these hats need to be given to others. Hire the best talent you can, and rely on their expertise. Don’t be intimidated by intelligence—embrace it and don’t let your ego stand in the way. Also, be aware that faulty assumptions about someone’s skill set can throw off deadlines and cause support issues down the road. Empowering people increases a sense of ownership and pride in one’s work.

Stay curious.
IBM has reinvented itself over and over to keep up with an ever-changing industry, with the help of curious employees. Curious people ask more questions, dig deeper, and find creative solutions to current industry needs. Don’t pour cold water on employees who want to do things differently. Listen to them with an open mind. Change is sometimes required, and it comes through innovation by curious people.

Integrate and automate everything.
Take a cue from SoftLayer: If you find yourself performing a repetitive task, automate and document it. We’ve focused on automation since day one. Not only do we automate server provisioning, but we’ve also automated our development build processes so that we can achieve repeatable success in code releases. Do your best to automate yourself out of a job and encourage others to live by this mantra. Don’t trade efficiency for job security—those who excel in this should be given more responsibility.

Peace of mind is worth a lot.
A coworker and I once applied to take on a job internally because our company was about to spend millions farming it out to a third party. We knew we could do it faster and cheaper, but the company went with the third party instead. Losing that contract taught me that companies are willing to pay handsomely for peace of mind. If you can build a team that is the source of that peace of mind for your company, you will go far.

When things don’t go right.
Sometimes things go off the rails, and there’s nothing you can do about it. People make mistakes. Deadlines are missed. Contracts fall through. In these situations, it’s important to focus on where the process went wrong and put changes in place to keep it from happening again. This is more beneficial to your team than finger pointing. If you can learn from your mistakes, you will create an environment that is agile and successful.

- Jason

March 30, 2015

The Importance of Data's Physical Location in the Cloud

If top-tier cloud providers use similar network hardware in their data centers and connect to the same transit and peering bandwidth providers, how can SoftLayer claim to provide the best network performance in the cloud computing industry?

Over the years, I've heard variations of that question asked dozens of times, and it's fairly easy to answer with impressive facts and figures. All SoftLayer data centers and network points of presence (PoPs) are connected to our unique global network backbone, which carries public, private, and management traffic to and from servers. Some back-of-the-envelope calculations using our network connectivity table reveal that we have more than 2,500Gbps of bandwidth connectivity with some of the largest transit and peering bandwidth providers in the world (and that total doesn't even include the private peering relationships we have with other providers in various regional markets). Additionally, customers may order servers with up to 10Gbps network ports in our data centers.

For the most part, those stats explain our differentiation, but part of the bigger network performance story is still missing, and to a certain extent it has been untold—until today.

The 2,500+Gbps of bandwidth connectivity we break out in the network connectivity table only accounts for the on-ramps and off-ramps of our network. Our global network backbone is actually made up of an additional 2,600+Gbps of bandwidth connectivity ... and all of that backbone connectivity transports SoftLayer-related traffic.

This robust network architecture streamlines the access to and delivery of data on SoftLayer servers. When you access a SoftLayer server, the network is designed to bring you onto our global backbone as quickly as possible at one of our network PoPs, and when you're on our global backbone, you'll experience fewer hops (and a more direct route that we control). When one of your users requests data from your SoftLayer server, that data travels across the global backbone to the nearest network PoP, where it is handed off to another provider to carry the data the "last mile."

With this controlled environment, I decided to undertake an impromptu science experiment to demonstrate how location and physical distance affect network performance in the cloud.

Speed Testing on the SoftLayer Global Network Backbone

I work in the SoftLayer office in downtown Houston, Texas. In network-speak, this location is HOU04. You won't find that location on any data center or network tables because it's just an office, but it's connected to the same global backbone as our data centers and network points of presence. From my office, the "last mile" doesn't exist; when I access a SoftLayer server, my bits and bytes only travel across the SoftLayer network, so we're effectively cutting out a number of uncontrollable variables in the process of running network speed tests.

For better or worse, I didn't tell any network engineers that I planned to run speed tests to every available data center and share the results I found, so you're seeing exactly what I saw with no tomfoolery. I just fired up my browser, headed to our Data Centers page, and made my way down the list using the SpeedTest option for each facility. Customers often go through this process when trying to determine the latency, speeds, and network path that they can expect from servers in each data center, but if we look at the results collectively, we can learn a lot more about network performance in general.

With the results, we'll discuss how network speed tests work, what the results mean, and why some might be surprising. If you're feeling scientific and want to run the tests yourself, you're more than welcome to do so.

The Ookla SpeedTests we link to from the data centers table measured the latency (ping time), jitter (variation in latency), download speeds, and upload speeds between the user's computer and the data center's test server. To run this experiment, I connected my MacBook Pro via Ethernet to a 100Mbps wired connection. At the end of each speed test, I took a screenshot of the performance stats:

SoftLayer Network Speed Test

To save you the trouble of trying to read all of the stats on each data center as they cycle through that animated GIF, I also put them into a table (click the data center name to see its results screenshot in a new window):

Data Center Latency (ms) Download Speed (Mbps) Upload Speed (Mbps) Jitter (ms)
AMS01 121 77.69 82.18 1
DAL01 9 93.16 87.43 0
DAL05 7 93.16 83.77 0
DAL06 7 93.11 83.50 0
DAL07 8 93.08 83.60 0
DAL09 11 93.05 82.54 0
FRA02 128 78.11 85.08 0
HKG02 184 50.75 78.93 2
HOU02 2 93.12 83.45 1
LON02 114 77.41 83.74 2
MEL01 186 63.40 78.73 1
MEX01 27 92.32 83.29 1
MON01 52 89.65 85.94 3
PAR01 127 82.40 83.38 0
SJC01 44 90.43 83.60 1
SEA01 50 90.33 83.23 2
SNG01 195 40.35 72.35 1
SYD01 196 61.04 75.82 4
TOK02 135 75.63 82.20 2
TOR01 40 90.37 82.90 1
WDC01 43 89.68 84.35 0

By performing these speed tests on the SoftLayer network, we can actually learn a lot about how speed tests work and how physical location affects network performance. But before we get into that, let's take note of a few interesting results from the table above:

  • The lowest latency from my office is to the HOU02 (Houston, Texas) data center. That data center is about 14.2 miles away as the crow flies.
  • The highest latency results from my office are to the SYD01 (Sydney, Australia) and SNG01 (Singapore) data centers. Those data centers are at least 8,600 and 10,000 miles away, respectively.
  • The fastest download speed observed is 93.16Mbps, and that number was seen from two data centers: DAL01 and DAL05.
  • The slowest download speed observed is 40.35Mbps from SNG01.
  • The fastest upload speed observed is 87.43Mbps to DAL01.
  • The slowest upload speed observed is 72.35Mbps to SNG01.
  • The upload speeds observed are faster than the download speeds from every data center outside of North America.

Are you surprised that we didn't see any results closer to 100Mbps? Is our server in Singapore underperforming? Are servers outside of North America more selfish to receive data and stingy to give it back?

Those are great questions, and they actually jumpstart an explanation of how the network tests work and what they're telling us.

Maximum Download Speed on 100Mbps Connection

If my office is 2 milliseconds from the test server in HOU02, why is my download speed only 93.12Mbps? To answer this question, we need to understand that to perform these tests, a connection is made using Transmission Control Protocol (TCP) to move the data, and TCP does a lot of work in the background. The download is broken into a number of tiny chunks called packets and sent from the sender to the receiver. TCP wants to ensure that each packet that is sent is received, so the receiver sends an acknowledgement back to the sender to confirm that the packet arrived. If the sender is unable to verify that a given packet was successfully delivered to the receiver, the sender will resend the packet.

This system is pretty simple, but in actuality, it's very dynamic. TCP wants to be as efficient as possible ... to send the fewest number of packets to get the entire message across. To accomplish this, TCP adjusts how much data it keeps in flight on each connection. The receiver dictates that amount by advertising a receive window, and TCP analyzes and adjusts the window to keep as much data in flight as possible without the connection becoming unstable. Some operating systems are better than others when it comes to tweaking and optimizing TCP transfer rates, but the work TCP does to ensure that the packets are sent and received without error adds overhead, and that overhead limits the maximum speed we can achieve.
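The practical ceiling this overhead imposes is easy to estimate: a single TCP stream can move at most one receive window of data per round trip. Here's a small Python sketch of that back-of-the-envelope calculation, assuming the classic 64KB window and ignoring loss and slow start:

```python
def tcp_throughput_mbps(rtt_seconds, window_bytes=65535, link_mbps=100.0):
    """Rough ceiling on single-stream TCP throughput: at most one receive
    window per round trip, capped by the link speed. Ignores packet loss,
    slow start, and window scaling."""
    window_limited = (window_bytes * 8) / rtt_seconds / 1e6  # bits -> Mbps
    return min(window_limited, link_mbps)
```

At 2ms, a 64KB window allows roughly 262Mbps, so the 100Mbps port is the bottleneck; at 195ms, the same window would cap out near 2.7Mbps, which is why window scaling matters so much on long paths.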

Understanding the SNG01 Results

Why did my SNG01 speed test max out at a meager 40.35Mbps on my 100Mbps connection? Well, now that we understand how TCP is working behind the scenes, we can see why our download speeds from Singapore are lower than we'd expect. Latency between the sending and successful receipt of a packet plays into TCP’s assessment of a stable connection. Higher ping times cause TCP to keep less data in flight than it would on a low-latency path, to ensure that no sizable chunk of data is lost (and has to be reproduced and resent).

With our global backbone optimizing the network path of the packets between Houston and Singapore, the more than 10,000-mile journey, the nature of TCP, and my computer's TCP receive window adjustments all factor into the download speeds recorded from SNG01. Looking at the results in the context of the distance the data has to travel, our results are actually well within the expected performance.
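We can also run the calculation in reverse: given the observed rate and round-trip time, the effective window must have been about rate times RTT bytes. A quick sketch:

```python
def implied_window_bytes(throughput_mbps, rtt_seconds):
    """Back out the effective TCP window from an observed single-stream
    transfer rate: at most (rate x RTT) bytes can be in flight at once."""
    return throughput_mbps * 1e6 / 8 * rtt_seconds

# 40.35Mbps at ~195ms RTT implies an effective window of roughly 1MB,
# far larger than the classic 64KB default, so window scaling was active.
```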

Because the default behavior of TCP is partially to blame for the results, we could actually tweak the test and tune our configurations to deliver faster speeds. To confirm that improvements can be made relatively easily, we can actually just look at the answer to our third question...

Upload > Download?

Why are the upload speeds faster than the download speeds after latency jumps from 50ms to 114ms? Every location in North America is within 2,000 miles of Houston, while the closest location outside of North America is about 5,000 miles away. With what we've learned about how TCP and physical distance play into download speeds, that jump in distance explains why the download speeds drop from 90.33Mbps to 77.41Mbps as soon as we cross an ocean, but how can the upload speeds to Europe (and even APAC) stay on par with their North American counterparts? The only difference between our download path and upload path is which side is sending and which side is receiving. And if the receiver determines the size of the TCP receive window, the most likely culprit in the discrepancy between download and upload speeds is TCP windowing.

A Linux server is built and optimized to be a server, whereas my Mac OS X laptop has a lot of other responsibilities, so it shouldn't come as a surprise that the default TCP receive window handling is better on the server side. With changes to the way my laptop handles TCP, download speeds would likely improve significantly. Additionally, if we wanted to push the envelope even further, we might consider using a different transfer protocol to take advantage of the consistent, controlled network environment.

The Importance of Physical Location in Cloud Computing

These real-world test results under controlled conditions demonstrate the significance of data's geographic proximity to its user on the user's perceived network performance. We know that the network latency in a 14-mile trip will be lower than the latency in a 10,000-mile trip, but we often don't think about the ripple effect latency has on other network performance indicators. And this experiment actually controls a lot of other variables that can exacerbate the performance impact of geographic distance. The tests were run on a 100Mbps connection because that's a pretty common maximum port speed, but if we ran the same tests on a GigE line, the difference would be even more dramatic. Proof: HOU02 @ 1Gbps v. SNG01 @ 1Gbps

Let's apply our experiment to a real-world example: Half of our site's user base is in Paris and the other half is in Singapore. If we chose to host our cloud infrastructure exclusively from Paris, our users would see dramatically different results. Users in Paris would have sub-10ms latency, while users in Singapore would have about 300ms of latency. Obviously, operating cloud servers in both markets would be the best way to ensure peak performance in both locations, but what if you can only afford to provision your cloud infrastructure in one location? Where would you choose to provision that infrastructure to provide a consistent user experience for your audience in both markets?

Given what we've learned, we should probably choose a location with roughly the same latency to both markets. We can use the SoftLayer Looking Glass to see that San Jose, California (SJC01) would be a logical midpoint ... At this second, the latency between SJC and PAR on the SoftLayer backbone is 149ms, and the latency between SJC and SNG is 162ms, so both would experience very similar performance (all else being equal). Our users in the two markets won't experience mind-blowing speeds, but neither will experience mind-numbing speeds either.
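That reasoning is easy to automate. Here's a small Python sketch that picks the single hosting site minimizing the worst-case latency across markets; the SJC01 figures come from the Looking Glass reading above, while the other values are hypothetical round numbers for illustration:

```python
# Latencies (ms) from candidate hosting sites to each market. The SJC01 row
# uses the Looking Glass readings quoted above; the rest are illustrative.
LATENCY_MS = {
    "PAR01": {"Paris": 5,   "Singapore": 300},
    "SNG01": {"Paris": 300, "Singapore": 5},
    "SJC01": {"Paris": 149, "Singapore": 162},
}

def best_single_site(latency_table):
    """Pick the site that minimizes the worst-case latency across markets."""
    return min(latency_table, key=lambda site: max(latency_table[site].values()))
```

Minimizing the worst case (rather than the average) is what gives both audiences a similar experience: SJC01 wins here because its slowest market is still far faster than the slowest market of either single-region option.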

The network performance implications of physical distance apply to all cloud providers, but because of the SoftLayer global network backbone, we're able to control many of the variables that lead to higher (or inconsistent) latency to and from a given data center. The longer a single provider can route traffic, the more efficiently that traffic will move. You might see the same latency speeds to another provider's cloud infrastructure from a given location at a given time across the public Internet, but you certainly won't see the same consistency from all locations at all times. SoftLayer has spent millions of dollars to build, maintain, and grow our global network backbone to transport public and private network traffic, and as a result, we feel pretty good about claiming to provide the best network performance in cloud computing.


March 23, 2015

Redefining the Startup Accelerator Business Model: An Interview with HIGHLINE’S Marcus Daniels

In this interview, SoftLayer’s community development lead in Canada, Qasim Virjee, sits down with Marcus Daniels, the co-founder and CEO of HIGHLINE, a venture-backed accelerator based in Vancouver and Toronto.

QV: Y Combinator has become an assumed standard for accelerators by creating its own business model. What do you think is both good and bad about this?

MD: Y Combinator (YC) not only created a new model for funding tech startups, but it also evolved the whole category. Historically, I like to think that Bill Gross's Idealab represented accelerator/incubator 1.0 and YC evolved that to 2.0 over the past decade, resulting in a hit parade of meaningful startups that are changing the world.

The good is that YC has created a “high quality” bar and led the standardization of micro-seed investment docs for the betterment of the whole startup ecosystem. It proved the model and has helped hundreds of amazing founders with venture profile businesses that are changing the world.

The bad is that there are now thousands of accelerators/incubators globally running generic programs that don't help founders much. More than half have a dismal track record of helping startups raise follow-on capital, and almost none has ever had a single exit from a startup it invested in.

HIGHLINE has a strong track record in our short history and now sees a big opportunity to be amongst the leaders in the evolution of the accelerator industry.

QV: Many accelerators focus on streamlining a program to process cohorts of companies at regular intervals throughout the year, every year. Often, the high throughput these programs expect means they must select companies from applications, rather than the approach you seem to be taking. Can you explain how HIGHLINE is sourcing companies for investment?

MD: HIGHLINE gets over 800 applications a year and targets about 20–30 investments during that time. All of our last 12 investments came either from referral partners or from the team hunting down the best founders to be part of our portfolio. Over the years, we have moved from the ideation stage, which comprises the majority of inbound applications, to the MVP-in-market stage, which is our sweet spot now. We also focus on low-volume, high-touch advisory support, which is why we spend a lot of time building relationships with founders and adding value to MVP-stage startups before investing; that groundwork helps us curate better deals.

QV: Traditionally, investment vehicles (such as VC firms and accelerator programs) have been run by financial industry types, but it seems that you are taking a more entrepreneurial approach with HIGHLINE and constantly evolving your business model. What can you tell me about this?

MD: The best accelerator leaders globally are past entrepreneurs who have some investment experience given how hands-on you have to be with the companies. Without the experience of starting and growing ventures, it is really hard to help tech founders navigate the daily challenges. Also, the best founders get to choose, and they want to work with other top founders in a long-term mentor/advisory/coaching relationship.

QV: How does being “VC-backed” differentiate HIGHLINE from other accelerators?

MD: Having several VCs as investors, such as the BDC and Relay Ventures, gives us an edge in several ways. Firstly, they are not only a great quality referral network for deals, but also a huge help in getting our companies venture-ready—even if they may not invest directly. Secondly, they allow us to internally focus on a specialization in helping venture profile businesses raise follow-on capital, as opposed to the glut of programs that are optimized for entrepreneurial education and lifestyle job creation. Lastly, they put big pressure on the whole HIGHLINE team to both get results for shareholders and build something unique that can be a category leader over the next decade.

QV: Our country is physically large and this seems to have created differentiated tech startup scenes between its cities. How does HIGHLINE collapse the geographic divide by having a physical presence in both Vancouver and Toronto?

MD: HIGHLINE tries to curate and unite the best digital founders, institutional investors, and ecosystem partners across Canada. We position our offices in both Vancouver and Toronto as portfolio hubs for founders who want to be headquartered in Canada, but want to take on the world. Most importantly, we spend time in all major Canadian startup ecosystems and have plans for unique events to bring our curated community closer together.

- Qasim

March 18, 2015

SoftLayer, Bluemix and OpenStack: A Powerful Combination

Building and deploying applications on SoftLayer with Bluemix, IBM’s Platform as a Service (PaaS), just got a whole lot more powerful. At IBM’s InterConnect, we announced a beta service for deploying OpenStack-based virtual servers within Bluemix. Obviously, the new service is exciting because it brings together the scalable, secure, high-performance infrastructure from SoftLayer with the open, standards-based cloud management platform of OpenStack. But making the new service available via Bluemix presents a unique set of opportunities.

Now Bluemix developers can deploy OpenStack-based virtual servers on SoftLayer or their own private OpenStack cloud in a consistent, developer-friendly manner. Without changing your code, your configuration, or your deployment method, you can launch your application to a local OpenStack cloud on your premises, a private OpenStack cloud you have deployed on SoftLayer bare metal servers, or to SoftLayer virtual servers within Bluemix. For instance, you could instantly fire up a few OpenStack-based virtual servers on SoftLayer to test out your new application. After you have impressed your clients and fully tested everything, you could deploy that application to a local OpenStack cloud in your own data center, all from within Bluemix. With Bluemix providing the ability to deploy applications across cloud deployment models, developers can create an infrastructure configuration once and deploy consistently, regardless of the stage of their application development life cycle.

OpenStack-based virtual servers on SoftLayer enable you to manage all of your virtual servers through standard OpenStack APIs and user interfaces, and to leverage the tooling, knowledge, and processes you or your organization have already built out. So the choice is yours: you may fully manage your virtual servers directly from within the Bluemix user interface, or choose standard OpenStack interface options such as the Horizon management portal, the OpenStack API, or the OpenStack command-line interface. For clients who are looking for enterprise-class infrastructure as a service but wish to avoid getting locked into a vendor’s proprietary interface, our new OpenStack standard access provides a new choice.
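To make "create a configuration once, deploy anywhere" concrete, here is a minimal Python sketch of the request body the standard OpenStack Compute API expects when booting a server. The server name, image, flavor, and key names below are placeholders for illustration, not real SoftLayer identifiers:

```python
def build_boot_request(name, image_ref, flavor_ref, key_name=None):
    """Build the JSON body for an OpenStack Compute 'POST /servers' call.

    The same body works against any OpenStack endpoint: a private
    cloud on your premises or SoftLayer virtual servers via Bluemix.
    """
    server = {
        "name": name,
        "imageRef": image_ref,    # ID or name of the boot image
        "flavorRef": flavor_ref,  # ID or name of the size/flavor
    }
    if key_name:
        server["key_name"] = key_name  # SSH keypair registered with Nova
    return {"server": server}


# Hypothetical values; in practice these come from your cloud's catalog.
body = build_boot_request("app-test-01", "ubuntu-14.04", "m1.small",
                          key_name="my-key")
```

The resulting dictionary can be POSTed (as JSON) to the compute endpoint of any OpenStack cloud, whether by plain HTTP, the OpenStack command-line client, or a library such as python-novaclient, which is what makes the deployment portable across environments.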

Providing OpenStack-based virtual servers is just one more (albeit major) step toward our goal of providing even more OpenStack integration with SoftLayer services. For clients looking for enterprise-class Infrastructure as a Service (IaaS) available globally and accessible via standard OpenStack interfaces, OpenStack-based virtual servers on SoftLayer provide just what they are looking for.

The beta is open now for you to test deploying and running servers on the new SoftLayer OpenStack public cloud service through Bluemix. You can sign up for a Bluemix 30-day free trial.

- @marcalanjones

February 19, 2015

Get Ready to Connect with SoftLayer – IBM InterConnect 2015

This year IBM is taking three amazing conferences and merging them into IBM InterConnect. With all the activity going on over the five days, the search for SoftLayer can be a serious undertaking. So spend more time enjoying the conference and less time flipping through your event guide. Here’s a rundown of everything you need to know to keep up with us.

SLayer Sessions at IBM InterConnect

SLayers are leading sessions all over InterConnect. We've cut out all the noise so it’s easy for you to slip our sessions into your conference agenda. What do you need to know? You’ll find it here.

DRD-5144A: Create an Auto-Scaling Server Deployment Using the SoftLayer API, Docker and SaltStack Lab
Phil Jackson, Lead Technology Evangelist (+ other speakers)
Monday, February 23 @ 1:00pm — MGM Grand, Room 304
CIS-5363A: SoftLayer 101 Plus: Understanding How to Build and Scale on the World’s Most Powerful IaaS Platform
Marc Jones, CTO
Monday, February 23 @ 2:00pm — Mandalay Bay, Breakers A
CIS-3427A: SoftLayer Storage Services Overview
Michael Fork, Product Manager, Strategy
Monday, February 23 @ 3:00pm — Mandalay Bay Cloud Infrastructure Engagement Center
SoftLayer’s Experts and Edibles Reception
Phil Jackson, Lead Technology Evangelist; Chris Gallo, Technology Evangelist; Jack Beech, VP of Business Development; Harold Smith, Director of Sales Engineering; Jerry Gutierrez, Sales Engineer
Monday, February 23 @ 4:30pm — Mandalay Bay Eco D Cafe, Booth 120, Solutions Center
CIS-5372A: Tips, Tricks and Planning for Building an Enterprise-Grade Cloud
Harold Hannon, Sr. Software Architect
Tuesday, February 24 @ 9:30am — Mandalay Bay, Breakers A
CIT-5983A: Meet the Experts on Hybrid Cloud with IBM Systems and SoftLayer
Michael Fork, Product Manager, Strategy & Frank Degilio, IBM
Tuesday, February 24 @ 5:00pm — Mandalay Bay, Meet the Experts Forum #3
CDP-3464A: SoftLayer Object Storage Deep-Dive
Michael Fork, Product Manager, Strategy & Ann Corrano, IBM
Wednesday, February 25 @ 8:00am — Mandalay Bay, Breakers I
CIS-5375A: Single Serving Servers: An In-Depth Look at Making Your Infrastructure Disposable
Christopher Gallo, Developer Advocate
Wednesday, February 25 @ 9:30am — Mandalay Bay, Breakers A
DRD-3765A: Using the SoftLayer API to Create and Manage Your Cloud Lab
Phil Jackson, Lead Technology Evangelist (+ other speakers)
Wednesday, February 25 @ 11:00am — MGM Grand, Room 304
CIS-5379A: Application Development on the Cloud: Picking the Right IaaS Platform
Phil Jackson, Lead Technology Evangelist
Wednesday, February 25 @ 2:00pm — Mandalay Bay, Breakers A
CGS-6100A: Day 3 General Session: A New Way Forward
Marc Jones, CTO (+ other speakers)
Wednesday, February 25 @ 3:30pm — Mandalay Bay Ballroom
CIS-5373A: How to Leverage Big Data Solutions on SoftLayer’s Infrastructure-as-a-Service Platform
Harold Hannon, Sr. Software Architect
Wednesday, February 25 @ 5:30pm — Mandalay Bay, Breakers A


If you’re looking for developer-focused topics within IBM InterConnect, we’ve got you covered. dev@InterConnect is a developer’s two-day dreamland—from a slate of developer-focused sessions to firsthand training, and even a Developer Playground where you’ll get to play with some of the hottest tech toys. As an added bonus, you will find the Server Challenge there too. Try your hand at re-racking the servers and plugging in the cables—fastest time wins a MacBook Air.

In between all of that, make a note to stop at these SLayer sessions:

DEV-6652A: Developing with SoftLayer
Phil Jackson, Lead Technology Evangelist
Tuesday, February 24 @ 10:00am — MGM Grand, Room 319
DEV-6654A: Bring Agile to Deployments
Christopher Gallo, Developer Advocate
Tuesday, February 24 @ 10:45am — MGM Grand, Room 319
DEV-6653A: Software for the Cloud with the SoftLayer Cloud
Harold Hannon, Sr. Software Architect
Tuesday, February 24 @ 11:30am — MGM Grand, Room 319

End dev@InterConnect with a bang at the Gaming Bash we are sponsoring with Cloudant. Join us for bites and beverages, and be ready to game. Prizes and swag will be up for grabs; you just have to put your skills to the test.

Tuesday, February 24, 2015
05:30 PM - 07:30 PM
MGM Grand, Conference Center Premier Ballroom 312/317

IBM Cloud Experience Zone

If you find yourself with some free time at Mandalay Bay, swing into the Solution EXPO and make a beeline for the IBM Cloud Experience Zone. That’s where you’ll find your resource for all things SoftLayer. If you have questions about SoftLayer, our SLayers will be there to answer them. If you just want to see what we’re all about, we’ll be there running live demos.

Rock @ IBM InterConnect

After a packed conference, we hope you’ll be ready to rock! IBM InterConnect and Rocket are giving attendees a VIP-worthy event with a performance from Aerosmith.

Go to the MGM Grand Garden Arena on Wednesday evening to party from 7:45–10:30pm. The event is included for InterConnect and dev@InterConnect attendees. Just don’t forget to bring your badge; it’s your ticket in!

We look forward to seeing you next week in Las Vegas!


February 2, 2015

#SLCloudLove: Growing an e-Commerce Business On The Cloud

Editor’s Note: Each month in 2015, we’ll be celebrating the cornucopia of reasons why the cloud reigns supreme — from customer tales to cloud insights and everything in between. During February, the notorious month of love, we’re showing you exactly why we heart the cloud. Follow all the fun on your favorite social networks by keeping tabs on #SLCloudLove.

Clicking Add to Cart—that’s how I like to shop these days. Brick-and-mortar shopping might be retail therapy, but the convenience and online discounts at my fingertips appease my inherently lazy human tendencies.

With more and more online e-stores cropping up, physical retail outlets can no longer afford to go without an online presence, including a mobile-friendly website and ordering system. The numbers say it all:

  • e-Commerce sales are expected to be more than $1.7 trillion with mobile commerce accounting for nearly $300 billion in sales. Read more here.
  • In India, the e-commerce market is expected to reach $6 billion in 2015—a 70 percent increase over 2014. Read more here.
  • The Chinese government is allowing foreign-owned e-commerce companies to operate in the Shanghai Free Trade Zone as part of a pilot program; the market is expected to see a lot of inflow despite tough competition from local giants like Alibaba. Read more here.
  • The six largest Southeast Asian countries (Indonesia, Malaysia, Philippines, Singapore, Thailand, and Vietnam) reached $7 billion in total revenues in 2013 and will grow at a CAGR of 37.6 percent to reach $34.5 billion by 2018. Read more here.

So when I recently attended the iMedia Online Retail Summit, I jumped at the chance to discuss with the audience the benefits of moving their e-commerce businesses to the cloud, and to share some very interesting stories about e-commerce platforms based in Asia.

Here is a quick overview of the presentation:

e-Commerce on Cloud
There is no denying the high reliance on IT. e-Commerce portals need to handle a rising number of Internet users, provide a secure and convenient online payment system, and support lucrative offers by e-tailers. The problem is that utilization is unpredictable (except during the holiday season, when it is predictably unpredictable!). If your site slows or freezes, especially during a sale, it is like shutting your store on Black Friday. Customers will abandon their carts, and social media will erupt with negative remarks—recall the recent headliner, Flipkart faces social media backlash over ‘crashes’, ‘misleading’ pricing.

The dilemma: Over-allocate and over-pay for unused resources just to manage sudden shopping spurts, or under-allocate resources and suffer the wrath of the new-age shopper. Cloud resources seem like a natural solution when you don’t want to be stuck in that either-or situation. But not just any cloud solution will do. If a provider has a lock-in period or contract (even if it’s short-term)—well, that’s not really cloud, now is it?

Similarly, a cloud solution is not justifying your investment if it charges you every time you, as an internal user, move your virtual servers across your operating geographies to get closer to customers. For example: your next online sale is targeted at holiday shoppers in Singapore, or you want to carry out test runs for your Amsterdam customer base, but your core virtual server resides in Melbourne.

Solving e-Commerce Challenges with SoftLayer
I like using this image as it gives a great view into how SoftLayer can help e-commerce and e-tail customers manage day-to-day scenarios. From seasonal site traffic spikes to needing backup solutions for business continuity, SoftLayer has a solution for it. Plus SoftLayer brings advantages gleaned from working with e-commerce giants over the past decade.

Walking the Talk—Businesses that are Leveraging Cloud . . . Successfully!
In October 2014, Natali Ardianto, CTO of one of Indonesia’s largest online travel and entertainment portals, gave a keynote address at Cloud Expo Asia about building the site. When it first launched a few years ago, the site faced TCP, DoS, and DDoS attacks while hosting unsuccessfully on two different IaaS providers. The company needed a highly stable infrastructure delivering consistent performance and reliable support to ensure site uptime and a smooth end-user experience, and it chose SoftLayer to support its site. Running on SoftLayer bare metal servers, its systems now handle more than 300 API requests per minute, and the company has seen a 75 percent cost savings. Watch Natali’s video where he discusses his cloud experience, or read the detailed case study.

Another customer offers an impressive collection of over 5 million real-time international hotel deals, a database of more than 800,000 properties, and an affiliate base of over 20,000 companies. The company uses a combination of SoftLayer bare metal and virtual servers, load balancers, and redundant iSCSI storage. This provides the company with several thousand cores of processing power and enables it to remain lean and move quickly. The company also uses the SoftLayer infrastructure to provide real-time predictive models to the website and to support its business intelligence tools. Read the detailed case study.

Photo credits @iMediaSummit

While at the conference, I met up with a great bunch of entrepreneurs, startups, and giants from across Asia. It was amazing to hear about the journeys and growth plans of Rakuten, Life Project, Qoo10, Telunjuk, and many more. Keep your ears open this coming year. The e-commerce landscape is rapidly progressing, and these guys are weaving the fabric.


–Namrata (Connect with me on LinkedIn or Twitter)

January 27, 2015

Hello, IBM Bluemix!

Developers, if you'd prefer to focus on building new applications instead of customizing your own unique cloud infrastructure, IBM Bluemix provides building blocks to rapidly develop and deploy applications on the Platform as a Service (PaaS) level to make life easier for you. It’s an ecosystem of services based on Cloud Foundry, an open source project designed to make deploying and scaling an application as simple as possible. Leveraging an existing project like this is a large part of what makes Bluemix so easy to use.

Bluemix integrates with Jazz, IBM’s DevOps service, to help manage code, plan versions and releases, and actually push code to production. You can still use it with your GitHub projects, so no worries there.

And as a SoftLayer customer (or potential customer), you can rest assured that Bluemix projects can run on SoftLayer’s hardware and network.

Core Ideas

The Application
This is your code. Bluemix comes with a number of predefined buildpacks to get your language of choice up and running quickly, but you will still need to actually develop your application. Bluemix hasn’t solved that problem yet.
A buildpack is a collection of scripts designed to set up your container and all of your application’s dependencies. If Bluemix doesn’t have a buildpack that suits your needs, you can always create your own. Extending a buildpack is pretty easy: simply clone an existing one to use as a base, make your changes, commit it to your GitHub repo, and then tell Bluemix about it so it can build your application properly.
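For example, a Cloud Foundry application manifest can point at a custom buildpack by its Git URL. A sketch of what that looks like (the app name and repository URL below are hypothetical):

```yaml
# manifest.yml -- hypothetical app name and buildpack repository
applications:
- name: my-bluemix-app
  memory: 256M
  buildpack: https://github.com/example/my-custom-buildpack.git
```

With this in place, pushing the application tells the platform to fetch and run your forked buildpack instead of one of the predefined ones.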
Bluemix has a long list of services you can bind to your application. Instead of standing up a MySQL server yourself, you can just bind the MySQL service to your application and start coding. Along with many of the standard services expected from a Cloud Foundry project, there are also some IBM-specific ones, like Watson as a service. While I haven’t had the time to explore Watson personally, everyone I talk to says it’s a rather neat thing to have on your application.
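Under the covers, Cloud Foundry hands your application the credentials of bound services through the VCAP_SERVICES environment variable, which holds a JSON document. Here is a small Python sketch of reading them; the service label, host, and credential values are illustrative, since the exact shape varies by service:

```python
import json


def mysql_credentials(vcap_json, service_label="mysql"):
    """Return credentials of the first bound service whose label starts
    with service_label, from a VCAP_SERVICES-style JSON string."""
    for label, instances in json.loads(vcap_json).items():
        if label.startswith(service_label):
            return instances[0]["credentials"]
    raise KeyError("no bound service matching %r" % service_label)


# Sample of what the platform might place in VCAP_SERVICES
# (hypothetical label and credentials):
sample = json.dumps({
    "mysql-5.5": [{
        "name": "my-mysql",
        "credentials": {"host": "db.example.com", "port": 3306,
                        "user": "app", "password": "secret"},
    }]
})

# In a running app you would read os.environ["VCAP_SERVICES"] instead.
creds = mysql_credentials(sample)
```

Because the credentials arrive via the environment rather than being hard-coded, the same application code works unchanged when you rebind it to a different service instance.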

Getting Started

I recommend reading this tutorial, which will get you to a nice “hello world” application. Overall, I found going from “I have no idea what Bluemix is” to “I’ve created my own Bluemix application!” to be a rather pleasant experience.
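To give you a feel for what such a “hello world” looks like, here is a minimal Python sketch in the Cloud Foundry style: the platform injects the port your app should listen on through the PORT environment variable. The greeting text is, of course, made up:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

GREETING = b"Hello, Bluemix!"


class HelloHandler(BaseHTTPRequestHandler):
    """Respond to every GET with a plain-text greeting."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(GREETING)


if __name__ == "__main__":
    # Cloud Foundry sets PORT for each app instance; fall back to 8080
    # when running locally.
    port = int(os.environ.get("PORT", 8080))
    HTTPServer(("0.0.0.0", port), HelloHandler).serve_forever()
```

Push this with a suitable buildpack, and the same file runs unmodified both on your laptop and on the platform.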

Creating your first Bluemix project is only a few clicks away. A Bluemix 30-day free trial should give you plenty of time to get an idea of whether Bluemix is the right fit for you.

Bluemix is absolutely worth checking out. So, what are you waiting for? Give it a go!

- Chris
