SoftLayer Posts

May 12, 2015

The SLayer Standard Vol. 1, No. 12

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

We've got the power
What makes an existing partnership better? More power, of course. IBM and SAP strengthened the bond by adding a new set of integrated Power Systems solutions for SAP HANA in-memory computing applications: POWER8 servers. Welcome to a new era of high-speed, high-volume data processing.

Straight from the horse’s mouth
On the subject of IBM’s cloudy future, Forbes sat down with none other than Robert LeBlanc, SVP of IBM’s Cloud Business, to clear the haze. Ambition, AWS envy, and giving up on the public cloud? It’s all there.

Friending Facebook
If your company could target the right folks on Facebook, would it be interested? That’s what IBM’s latest ad partnership with the social network is all about. A write-up in Fast Company provides all the details behind the cooperative, which aims to “more accurately identify which of [a company’s] customers are among the 1.44 billion people active on Facebook.” After all, learning to leverage the social web just makes sense.

We’re so happy for you
When big things happen for our customers, we love to highlight them. Longtime IBM business partner Manhattan Associates chose IBM Cloud as a preferred cloud provider for its clients (including tech support for those running their applications on SoftLayer). And Distribution Central is now offering its 1,000 resellers access to AWS, Azure, and IBM Cloud’s SoftLayer cloud services through a single interface. Way to go, everyone.

No autographs, please!
Oh, and it’s come to our attention that we were mentioned on the latest episode of HBO’s Silicon Valley. Although the scenario in which we were mentioned wasn’t quite factually accurate, being famous looks good on us, if we do say so ourselves. Now if you’ll excuse us, we’re going to inquire about our star on the Hollywood Walk of Fame.

-Fayza

April 29, 2015

The SLayer Standard Vol. 1, No. 11

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Q1
A recent study deemed SoftLayer the top-mentioned hosting provider for cloud services, cited by 50 percent of IT decision makers. This news comes on the heels of IBM’s first quarter earnings report, announcing a 75 percent increase in cloud revenue (with yearly revenue at $7.7 billion). Forbes explains IBM’s rise to power over the competition in “Move Over Amazon, IBM Can Also Claim Top Spot In Cloud Services.” Additionally, Marc Jones, SoftLayer’s chief technology officer, gave details to CRN on how IBM expects to stay on top of the cloud competition by offering pricing benefits over its market-leading rivals.

SoftLayer opens data center in the Netherlands…again
Last week, in an effort to continue delivering on our promise to expand data centers worldwide, SoftLayer opened a second data center in the Netherlands—just outside Amsterdam in Almere. “The new facility demonstrates the demand and success IBM Cloud is having at delivering high-value services right to the doorstep of our clients,” said James Comfort, IBM cloud services general manager.

Building Applications in the Cloud with SoftLayer
For those who enjoy broadcast over print, our lead technology evangelist, Phil Jackson, sat down with Jacob Goldstein of Wireframes to discuss how to choose the right servers for your needs. Listen to the podcast.

-JRL

April 20, 2015

The SLayer Standard Vol. 1, No. 10

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

The Battle for Global Market Share
Warmer weather must be around the corner—or it could just be the cloud industry heating up. How will cloud providers profit as more and more providers push for world domination? The Economist predicts an industry change as prices drop.

IBM Partners with TI on Secure APIs for IoT
Allow me to translate: the International Business Machines Corporation is partnering with Texas Instruments to secure application programming interfaces (APIs) for the Internet of Things (IoT). Through its collaboration with TI, IBM will create a Secure Registry Service that will provide trust and authentication practices and protocols across the value chain, from silicon embedded in devices and products to businesses and homes.

(Join the conversation at #IoTNow or #IoT.)

The U.S. Army Goes Hybrid
The U.S. Army is hoping to see a 50 percent cost savings by utilizing IBM cloud services and products. Like many customers, the Army opted for a hybrid solution for security, flexibility, and ease of scale. Read more about what IBM Cloud and SoftLayer are doing for the U.S. Army and other U.S. government departments.

The Only Constant is Change
Or so said Heraclitus of Ephesus. And to keep up with the changing times, IBM has reinvented itself over and over again to stay relevant and successful. This interesting read discusses why big corporations just aren't what they used to be, what major factors have transformed the IT industry over the last couple of decades, and how IBM has been leading the change, time after time.

-JRL

April 10, 2015

The SLayer Standard Vol. 1, No. 9

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Welcome to the Masters
If you’re not practicing your swing this weekend, you’re watching the Masters. Over the next couple of days, professional golfers will seek their shot at landing the coveted Green Jacket. And while everyone might be watching the leaderboard, IBM will be hard at work in what they are calling the “bunker,” located in a small green building at the Augusta National Golf Club.

What does IBM have to do with the Masters? Everything.

Read how IBM, backed by the power of the SoftLayer cloud, is making the Masters website virtually uncrashable.

And for those who can’t line the greens to watch their favorite players, IBM is utilizing the lasers the Golf Club has placed around the course to track the ball as it flies from hole to hole. Learn more about the golf-ball tracking technology here.

Open Happiness
In a move to streamline tech operations and cut costs, Coca-Cola Amatil is partnering with IBM Cloud to move some of its platforms to SoftLayer data centers in Sydney and Melbourne—a deal sure to open happiness.

"The move to SoftLayer will provide us with a game-changing level of flexibility, resiliency and reliability to ramp up and down capacity as needed. It will also remove the need for large expenditure on IT infrastructure." - Barry Simpson, CIO, Coca-Cola Amatil

Read more about the new CCA cloud environment and the five-year, multimillion-dollar deal.

-JRL

April 1, 2015

The SLayer Standard Vol. 1 No. 8

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Sunny Skies for IBM Cloud and The Weather Company
IBM made big headlines on Tuesday when it announced it would team up with The Weather Company, boasting a “100 percent chance of smarter business forecasts.”

Bloomberg sits down with Bob Picciano, senior VP of IBM Analytics, and David Kenny, CEO of The Weather Company, to discuss what sets this partnership apart from past efforts to analyze the weather. Using Watson Analytics and the Internet of Things, the partnership will transform business decision-making based on weather behavior. Read how IBM’s $3 billion investment in the Internet of Things will collect weather data from 100,000 weather stations around the world and turn it into meaningful data for business owners.

Indian Startups Choose SoftLayer
According to the National Association of Software and Services Companies (NASSCOM), India has the world’s third largest and fastest-growing startup ecosystem. Like many SoftLayer startup customers, Goldstar Healthcare, Vtiger, Clematix, and Ecoziee Marketing utilize the SoftLayer cloud infrastructure platform to “begin on a small scale and then expand rapidly to meet workload demands without having to worry about large investments in infrastructure development.”

New SoftLayer Storage Offerings
Last week, SoftLayer announced the launch of block storage and file storage complete with Endurance- and Performance-class tiers. The media was fast to report the new offerings that provide customers more choice, flexibility, and control for their storage needs and workloads.

“ … SoftLayer’s focus on tailored capacity and performance needs coincides with the trend in the cloud market of customizing technology based on different application requirements.” – IBM Splits SoftLayer Cloud Storage Into Endurance, Performance Tiers

“In the age of the cloud, the relationship between cloud storage capacity and I/O performance has officially become divorced.” – IBM Falls Into Cloud Storage Pricing Line

Pick your favorite online tech media and read all about it: SiliconANGLE, Computer Weekly, Data Center Knowledge, CRN, V3, Cloud Computing Intelligence, Storage Networking Solutions UK, and DCS Europe.

#IBMandTwitter
There are more than half a billion tweets posted to Twitter every day. IBM is teaming up with Twitter to turn those “tweets into insights for more than 100 organizations around the world.” Leo Sun of The Motley Fool takes a closer look at what the deal means to IBM and Twitter.

“Twitter provides a powerful new lens through which to look at the world. This partnership, drawing on IBM’s leading cloud-based analytics platform, will help clients enrich business decisions with an entirely new class of data. This is the latest example of how IBM is reimagining work.” – Ginni Rometty, IBM Chairman, President and CEO

-JRL

March 30, 2015

The Importance of Data's Physical Location in the Cloud

If top-tier cloud providers use similar network hardware in their data centers and connect to the same transit and peering bandwidth providers, how can SoftLayer claim to provide the best network performance in the cloud computing industry?

Over the years, I've heard variations of that question asked dozens of times, and it's fairly easy to answer with impressive facts and figures. All SoftLayer data centers and network points of presence (PoPs) are connected to our unique global network backbone, which carries public, private, and management traffic to and from servers. Using our network connectivity table, some back-of-the-envelope calculations reveal that we have more than 2,500Gbps of bandwidth connectivity with some of the largest transit and peering bandwidth providers in the world (and that total doesn't even include the private peering relationships we have with other providers in various regional markets). Additionally, customers may order servers with up to 10Gbps network ports in our data centers.

For the most part, those stats explain our differentiation, but part of the bigger network performance story is still missing, and to a certain extent it has been untold—until today.

The 2,500+Gbps of bandwidth connectivity we break out in the network connectivity table only accounts for the on-ramps and off-ramps of our network. Our global network backbone is actually made up of an additional 2,600+Gbps of bandwidth connectivity ... and all of that backbone connectivity transports SoftLayer-related traffic.

This robust network architecture streamlines the access to and delivery of data on SoftLayer servers. When you access a SoftLayer server, the network is designed to bring you onto our global backbone as quickly as possible at one of our network PoPs, and when you're on our global backbone, you'll experience fewer hops (and a more direct route that we control). When one of your users requests data from your SoftLayer server, that data travels across the global backbone to the nearest network PoP, where it is handed off to another provider to carry the data the "last mile."
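Hop count is something you can check for yourself. Below is a minimal sketch that wraps the standard traceroute utility from Python; the target hostname is illustrative, so substitute any server you want to test against:

```python
import subprocess

def count_hops(host):
    """Run the system traceroute and roughly count the hops to a host.

    Fewer hops generally means a more direct path; on a single
    provider's backbone you avoid hand-offs between carriers.
    """
    result = subprocess.run(
        ["traceroute", "-n", host],  # -n skips reverse DNS lookups
        capture_output=True, text=True, timeout=120,
    )
    # The first line of traceroute output is a header; each
    # subsequent line is (roughly) one hop.
    return len(result.stdout.strip().splitlines()) - 1

if __name__ == "__main__":
    # Illustrative target; any SoftLayer-hosted server works.
    print(count_hops("speedtest.dal05.softlayer.com"))
```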

With this controlled environment, I decided to undertake an impromptu science experiment to demonstrate how location and physical distance affect network performance in the cloud.

Speed Testing on the SoftLayer Global Network Backbone

I work in the SoftLayer office in downtown Houston, Texas. In network-speak, this location is HOU04. You won't find that location on any data center or network tables because it's just an office, but it's connected to the same global backbone as our data centers and network points of presence. From my office, the "last mile" doesn't exist; when I access a SoftLayer server, my bits and bytes only travel across the SoftLayer network, so we're effectively cutting out a number of uncontrollable variables in the process of running network speed tests.

For better or worse, I didn't tell any network engineers that I planned to run speed tests to every available data center and share the results I found, so you're seeing exactly what I saw with no tomfoolery. I just fired up my browser, headed to our Data Centers page, and made my way down the list using the SpeedTest option for each facility. Customers often go through this process when trying to determine the latency, speeds, and network path that they can expect from servers in each data center, but if we look at the results collectively, we can learn a lot more about network performance in general.

With the results, we'll discuss how network speed tests work, what the results mean, and why some might be surprising. If you're feeling scientific and want to run the tests yourself, you're more than welcome to do so.
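If scripting sounds more appealing than clicking through browser tests, the latency and jitter portions of the test are easy to approximate yourself. A minimal sketch (the hostname is illustrative): it times repeated TCP connections and reports the mean round trip and the variation between consecutive samples:

```python
import socket
import statistics
import time

def tcp_latency(host, port=80, samples=10):
    """Approximate latency (ms) and jitter (ms) using TCP connect times.

    TCP connection setup costs one round trip, so timing it mimics the
    ping/jitter numbers a browser speed test reports.
    """
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000.0)
    latency = statistics.mean(times)
    # Jitter as the mean absolute difference between consecutive samples.
    jitter = statistics.mean(abs(a - b) for a, b in zip(times, times[1:]))
    return latency, jitter

if __name__ == "__main__":
    # Illustrative target; point it at the data center you want to test.
    lat, jit = tcp_latency("speedtest.dal05.softlayer.com")
    print(f"latency: {lat:.1f} ms, jitter: {jit:.1f} ms")
```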

The Ookla SpeedTests we link to from the data centers table measured the latency (ping time), jitter (variation in latency), download speeds, and upload speeds between the user's computer and the data center's test server. To run this experiment, I connected my MacBook Pro via Ethernet to a 100Mbps wired connection. At the end of each speed test, I took a screenshot of the performance stats:

SoftLayer Network Speed Test

To save you the trouble of trying to read all of the stats on each data center as they cycle through that animated GIF, I also put them into a table (click the data center name to see its results screenshot in a new window):

Data Center   Latency (ms)   Download Speed (Mbps)   Upload Speed (Mbps)   Jitter (ms)
AMS01         121            77.69                    82.18                 1
DAL01         9              93.16                    87.43                 0
DAL05         7              93.16                    83.77                 0
DAL06         7              93.11                    83.50                 0
DAL07         8              93.08                    83.60                 0
DAL09         11             93.05                    82.54                 0
FRA02         128            78.11                    85.08                 0
HKG02         184            50.75                    78.93                 2
HOU02         2              93.12                    83.45                 1
LON02         114            77.41                    83.74                 2
MEL01         186            63.40                    78.73                 1
MEX01         27             92.32                    83.29                 1
MON01         52             89.65                    85.94                 3
PAR01         127            82.40                    83.38                 0
SJC01         44             90.43                    83.60                 1
SEA01         50             90.33                    83.23                 2
SNG01         195            40.35                    72.35                 1
SYD01         196            61.04                    75.82                 4
TOK02         135            75.63                    82.20                 2
TOR01         40             90.37                    82.90                 1
WDC01         43             89.68                    84.35                 0

By performing these speed tests on the SoftLayer network, we can actually learn a lot about how speed tests work and how physical location affects network performance. But before we get into that, let's take note of a few interesting results from the table above:

  • The lowest latency from my office is to the HOU02 (Houston, Texas) data center. That data center is about 14.2 miles away as the crow flies.
  • The highest latency results from my office are to the SYD01 (Sydney, Australia) and SNG01 (Singapore) data centers. Those data centers are at least 8,600 and 10,000 miles away, respectively.
  • The fastest download speed observed is 93.16Mbps, and that number was seen from two data centers: DAL01 and DAL05.
  • The slowest download speed observed is 40.35Mbps from SNG01.
  • The fastest upload speed observed is 87.43Mbps to DAL01.
  • The slowest upload speed observed is 72.35Mbps to SNG01.
  • The upload speeds observed are faster than the download speeds from every data center outside of North America.

Are you surprised that we didn't see any results closer to 100Mbps? Is our server in Singapore underperforming? Are servers outside of North America more selfish to receive data and stingy to give it back?

Those are great questions, and they actually jumpstart an explanation of how the network tests work and what they're telling us.

Maximum Download Speed on 100Mbps Connection

If my office is 2 milliseconds from the test server in HOU02, why is my download speed only 93.12Mbps? To answer this question, we need to understand that to perform these tests, a connection is made using Transmission Control Protocol (TCP) to move the data, and TCP does a lot of work in the background. The download is broken into a number of tiny chunks called packets and sent from the sender to the receiver. TCP wants to ensure that each packet that is sent is received, so the receiver sends an acknowledgement back to the sender to confirm that the packet arrived. If the sender is unable to verify that a given packet was successfully delivered to the receiver, the sender will resend the packet.

This system is pretty simple, but in actuality, it's very dynamic. TCP wants to be as efficient as possible and send the fewest packets needed to get the entire message across. To accomplish this, TCP can modify the size of each packet to optimize it for each communication. The receiver dictates how large each packet should be via its receive window, analyzing and adjusting that window to allow the largest packets possible without the connection becoming unstable. Some operating systems are better than others when it comes to tweaking and optimizing TCP transfer rates, but the work TCP does to ensure that packets are sent and received without error adds overhead, and that overhead limits the maximum speed we can achieve.
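That ceiling is easy to quantify: TCP can have at most one receive window of data in flight per round trip, so throughput is bounded by the window size divided by the round-trip time. A back-of-the-envelope sketch using the classic unscaled 64KB window and a few RTTs from the results table:

```python
def tcp_throughput_ceiling_mbps(window_bytes, rtt_ms):
    """Upper bound on TCP throughput: one receive window per round trip."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

# The classic unscaled 64KB TCP window against RTTs from the table above.
for dc, rtt in [("HOU02", 2), ("DAL05", 7), ("AMS01", 121), ("SNG01", 195)]:
    print(dc, round(tcp_throughput_ceiling_mbps(64 * 1024, rtt), 1), "Mbps")
# HOU02 262.1, DAL05 74.9, AMS01 4.3, SNG01 2.7
```

The long-haul results in the table beat these 64KB ceilings by a wide margin, which tells us window scaling grew the window well past 64KB; the overhead that remains is what keeps even the 2ms Houston result just under the line rate.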

Understanding the SNG01 Results

Why did my SNG01 speed test max out at a meager 40.35Mbps on my 100Mbps connection? Well, now that we understand how TCP works behind the scenes, we can see why our download speeds from Singapore are lower than we'd expect. The latency between sending a packet and receiving its acknowledgement factors into TCP’s assessment of how stable the connection is. Higher ping times cause TCP to use smaller packet sizes than it would on a low-latency path, ensuring that no sizable packet is lost (and has to be reproduced and resent).

With our global backbone optimizing the network path of the packets between Houston and Singapore, the more than 10,000-mile journey, the nature of TCP, and my computer's TCP receive window adjustments all factor into the download speeds recorded from SNG01. Looking at the results in the context of the distance the data has to travel, our results are actually well within the expected performance.
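Inverting that window/RTT bound shows how much data must have been in flight to hit the observed number:

```python
# Data in flight needed to sustain the observed SNG01 download speed.
observed_mbps = 40.35
rtt_s = 0.195
window_bytes = observed_mbps * 1_000_000 / 8 * rtt_s
print(f"{window_bytes / 1024:.0f} KB")  # roughly 960 KB per round trip
```

In other words, my laptop's TCP stack was keeping nearly a megabyte of unacknowledged data on the wire at once, and the download speed tops out wherever that window stops growing.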

Because the default behavior of TCP is partially to blame for the results, we could tweak the test and tune our configurations to deliver faster speeds. To confirm that improvements can be made relatively easily, we can just look at the answer to our third question...

Upload > Download?

Why are the upload speeds faster than the download speeds after latency jumps from 50ms to 114ms? Every location in North America is within 2,000 miles of Houston, while the closest location outside of North America is about 5,000 miles away. With what we've learned about how TCP and physical distance play into download speeds, that jump in distance explains why the download speeds drop from 90.33Mbps to 77.41Mbps as soon as we cross an ocean, but how can the upload speeds to Europe (and even APAC) stay on par with their North American counterparts? The only difference between our download path and upload path is which side is sending and which side is receiving. And if the receiver determines the size of the TCP receive window, the most likely culprit in the discrepancy between download and upload speeds is TCP windowing.

A Linux server is built and optimized to be a server, whereas my Mac OS X laptop has a lot of other responsibilities, so it shouldn't come as a surprise that the default TCP receive window handling is better on the server side. With changes to the way my laptop handles TCP, download speeds would likely improve significantly. Additionally, if we wanted to push the envelope even further, we might consider using a different transfer protocol to take advantage of the consistent, controlled network environment.
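As a sketch of one such change, an application can request a larger socket receive buffer, which caps how far the kernel will let the receive window grow (the 4MB figure below is an illustrative request; the OS may grant a different value or clamp it to its own limits):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a 4MB receive buffer before connecting; a bigger buffer lets
# the TCP receive window grow, which matters on high-latency paths
# (throughput <= window / RTT).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)

# The kernel may adjust the request; check what was actually granted.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"receive buffer: {granted} bytes")
```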

The Importance of Physical Location in Cloud Computing

These real-world test results under controlled conditions demonstrate how significantly data's geographic proximity to its users affects perceived network performance. We know that the network latency of a 14-mile trip will be lower than that of a 10,000-mile trip, but we often don't think about the ripple effect latency has on other network performance indicators. And this experiment actually controls many of the variables that can exacerbate the performance impact of geographic distance. The tests were run on a 100Mbps connection because that's a pretty common maximum port speed, but if we ran the same tests on a GigE line, the difference would be even more dramatic. Proof: HOU02 @ 1Gbps v. SNG01 @ 1Gbps

Let's apply our experiment to a real-world example: Half of our site's user base is in Paris and the other half is in Singapore. If we chose to host our cloud infrastructure exclusively from Paris, our users would see dramatically different results. Users in Paris would have sub-10ms latency while users in Singapore have about 300ms of latency. Obviously, operating cloud servers in both markets would be the best way to ensure peak performance in both locations, but what if you can only afford to provision your cloud infrastructure in one location? Where would you choose to provision that infrastructure to provide a consistent user experience for your audience in both markets?

Given what we've learned, we should probably choose a location with roughly the same latency to both markets. We can use the SoftLayer Looking Glass to see that San Jose, California (SJC01) would be a logical midpoint ... At this second, the latency between SJC and PAR on the SoftLayer backbone is 149ms, and the latency between SJC and SNG is 162ms, so both would experience very similar performance (all else being equal). Our users in the two markets won't experience mind-blowing speeds, but neither will experience mind-numbing speeds either.
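That midpoint logic generalizes: given a table of latencies from candidate data centers to your user markets, pick the candidate whose worst market latency is smallest. A sketch using the SJC numbers quoted above (the PAR01 and SNG01 rows are illustrative, built from the sub-10ms local and roughly 300ms long-haul figures mentioned earlier):

```python
# Round-trip latencies (ms) from candidate data centers to each market.
latency_ms = {
    "SJC01": {"Paris": 149, "Singapore": 162},
    "PAR01": {"Paris": 5,   "Singapore": 300},
    "SNG01": {"Paris": 300, "Singapore": 5},
}

# Choose the site that minimizes the worst latency any market sees.
best = min(latency_ms, key=lambda dc: max(latency_ms[dc].values()))
print(best)  # SJC01
```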

The network performance implications of physical distance apply to all cloud providers, but because of the SoftLayer global network backbone, we're able to control many of the variables that lead to higher (or inconsistent) latency to and from a given data center. The longer a single provider controls the route traffic takes, the more efficiently that traffic moves. You might see the same latency to another provider's cloud infrastructure from a given location at a given time across the public Internet, but you certainly won't see the same consistency from all locations at all times. SoftLayer has spent millions of dollars to build, maintain, and grow our global network backbone to transport public and private network traffic, and as a result, we feel pretty good about claiming to provide the best network performance in cloud computing.

-@khazard

March 25, 2015

Introducing New Block Storage and File Storage

Everyone knows data growth is exploding. The chart below illustrates data growth—in zettabytes—over the last 11 years.

Storing all that data can get complicated. The rise of cloud computing and virtualization has led to myriad options for data storage. Kevin Trachier did a great job of defining and highlighting the differences in various cloud storage options in his blog post, Which storage solution is best for your project?

Today, I’m excited to announce that we’ve expanded SoftLayer’s cloud storage portfolio to include two new storage products: block storage and file storage, both featuring Performance and Endurance options. These storage offerings allow you to create storage volumes or shares and connect them to your bare metal or virtual servers using either NFS or iSCSI connectivity.

The Endurance and Performance classes of both block storage and file storage feature:

  • Storage sizes to fit any application—from 20GB to 12TB
  • Highly available connectivity—redundant networking connections reduce risk and mitigate against unplanned events to provide business continuity
  • Allocated IOPS—meet any workload requirement through customizable levels of IOPS that are there when you need them
  • Durable and resilient—infrastructure provides peace of mind against data loss without managing system-level RAID arrays
  • Concurrent Access—multiple hosts can simultaneously access both block and file volumes in support of advanced use cases such as clustered databases

The Endurance class of both block storage and file storage is available in three tiers, allowing you to choose the right balance of performance and cost for your needs (a quick sizing sketch follows the list):

  • 0.25 IOPS per GB is designed for workloads with low I/O intensity. Example applications include storing mailboxes or departmental level file shares.
  • 2 IOPS per GB is designed for most general purpose use. Example applications include hosting small databases backing Web applications or virtual machine disk images for a hypervisor.
  • 4 IOPS per GB is designed for higher intensity workloads. Example applications include transactional and other performance-sensitive databases.
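Because Endurance allocates IOPS in proportion to volume size, sizing out a volume is simple arithmetic. A quick sketch of what each tier yields for a given size:

```python
# Endurance tiers allocate IOPS per gigabyte of provisioned capacity.
TIER_IOPS_PER_GB = {"0.25 IOPS/GB": 0.25, "2 IOPS/GB": 2, "4 IOPS/GB": 4}

def endurance_iops(size_gb, tier):
    """Total IOPS allocated to an Endurance volume of the given size."""
    return size_gb * TIER_IOPS_PER_GB[tier]

# Example: a 500GB volume at each tier.
for tier in TIER_IOPS_PER_GB:
    print(tier, "->", endurance_iops(500, tier), "IOPS")
# 0.25 IOPS/GB -> 125.0, 2 IOPS/GB -> 1000, 4 IOPS/GB -> 2000
```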

All Endurance tiers support snapshots and replication to remote data centers.

We designed the Performance class of both block storage and file storage to support high I/O applications like relational databases that require consistent levels of performance. Block volumes and file shares can be provisioned with up to 6,000 IOPS and 96MB/s of throughput.

Available sizes and IOPS combinations:

Block storage and file storage are available in SoftLayer data centers worldwide. SoftLayer customers can log in to the customer portal and start using them today.

-Michael

March 23, 2015

Redefining the Startup Accelerator Business Model: An Interview with HIGHLINE’S Marcus Daniels

In this interview, SoftLayer’s community development lead in Canada, Qasim Virjee, sits down with Marcus Daniels, the co-founder and CEO of HIGHLINE, a venture-backed accelerator based in Vancouver and Toronto.

QV: Y Combinator has become an assumed standard for accelerators by creating its own business model. What do you think is both good and bad about this?

MD: Y Combinator (YC) not only created a new model for funding tech startups, but it also evolved the whole category. Historically, I like to think that Bill Gross's Idealab represented accelerator/incubator 1.0 and YC evolved that to 2.0 over the past decade, resulting in a hit parade of meaningful startups that are changing the world.

The good is that YC has created a “high quality” bar and led the standardization of micro-seed investment docs for the betterment of the whole startup ecosystem. It proved the model and has helped hundreds of amazing founders with venture profile businesses that are changing the world.

The bad is that there are now thousands of accelerators/incubators globally running generic programs that don't help founders much. More than half have a horrible track record of helping startups raise follow-on capital, and almost none has ever had an exit from a startup it invested in.

HIGHLINE has a strong track record in our short history and now sees a big opportunity to be amongst the leaders in the evolution of the accelerator industry.

QV: Many accelerators focus on streamlining a program to process cohorts of companies at regular intervals throughout the year, every year. Often, the high throughput these programs expect means they must select companies from applications, rather than the approach you seem to be taking. Can you explain how HIGHLINE is sourcing companies for investment?

MD: HIGHLINE gets over 800 applications a year and targets about 20–30 investments during that time. Out of our last 12 investments, all had either come from referral partners or from the team hunting the best founders to be part of our portfolio. Over the years, we have moved from the ideation stage, which comprises the majority of inbound applications, to the MVP-in-market stage, which is our sweet spot now. We also focus on low-volume, high-touch advisory support; spending time building relationships with founders and adding value to MVP-stage startups before investing helps us curate better deals.

QV: Traditionally, investment vehicles (such as VC firms and accelerator programs) have been run by financial industry types, but it seems that you are taking a more entrepreneurial approach with HIGHLINE and constantly evolving your business model. What can you tell me about this?

MD: The best accelerator leaders globally are past entrepreneurs who have some investment experience given how hands-on you have to be with the companies. Without the experience of starting and growing ventures, it is really hard to help tech founders navigate the daily challenges. Also, the best founders get to choose, and they want to work with other top founders in a long-term mentor/advisory/coaching relationship.

QV: How does being “VC-backed” differentiate HIGHLINE from other accelerators?

MD: Having several VCs as investors, such as the BDC and Relay Ventures, gives us an edge in several ways. Firstly, they are not only a great quality referral network for deals, but also a huge help in getting our companies venture-ready—even if they may not invest directly. Secondly, they allow us to internally focus on a specialization in helping venture profile businesses raise follow-on capital, as opposed to the glut of programs that are optimized for entrepreneurial education and lifestyle job creation. Lastly, they put big pressure on the whole HIGHLINE team to both get results for shareholders and build something unique that can be a category leader over the next decade.

QV: Our country is physically large, and this seems to have created distinct tech startup scenes in its cities. How does HIGHLINE collapse the geographic divide by having a physical presence in both Vancouver and Toronto?

MD: HIGHLINE tries to curate and unite the best digital founders, institutional investors, and ecosystem partners across Canada. We position our offices in both Vancouver and Toronto as portfolio hubs for founders who want to be headquartered in Canada, but want to take on the world. Most importantly, we spend time in all major Canadian startup ecosystems and have plans for unique events to bring our curated community closer together.

- Qasim

March 12, 2015

Sydney’s a Go

Transforming an empty room into a fully operational data center in just three months: Some said it couldn’t be done, but we did it. In less than three months, actually.

Placing a small team on-site and turning an empty room into a data center is what SoftLayer refers to as a Go Live. Now, of course, there is more to bringing a data center online than just the transformation of an empty room. In the months leading up to the Go Live deployment, there are details to work out, contracts to sign, and the electrical fit out (EFO) of the room itself. During my time with SoftLayer, I have been involved in building several of our data centers, or SoftLayer pods as we call them. Pods are designed to facilitate infrastructure scalability, and although they have evolved over the years as newer, faster equipment has become available, the original principles behind the design are still intact—so much so that a data center technician could travel to any SoftLayer data center in the world and start working without missing a beat. The same holds true for building a pod from the ground up. This uniformity is what allows us to fast-track the build-out of a new SoftLayer pod, and it is one of the reasons why the Sydney data center launch was such a success.

Rewind Three Months

When we landed in Sydney on December 11, 2014, we had an empty server room and about 125 pallets of gear and equipment that had been carefully packed and shipped by our inventory and logistics team. First order of business: breaking down the pallets, inspecting the equipment for any signs of damage, and checking that we received everything needed for the build. It’s really quite impressive to know that everything from screwdrivers to our 25U routers to even earplugs had been logged and accounted for. When you are more than 8,500 miles away from your base of operations, it’s imperative that the Go Live team has everything it needs on hand from the start. Something as seemingly inconsequential as not having the proper screws can lead to costly delays during the build. Once everything’s been checked off, the real fun begins.


(From Left) Jackie Vong, Dennis Vollmer, Jon Bowden, Chris Stelly, Antonio Gomez, Harpal Singh, Kneeling - Zachary Schacht, Peter Panagopoulos, and Marcelo Alba

Next we set up the internal equipment that powers the pod: four rows of equipment that encompass everything from networking gear to storage to the servers that run various internal systems. Racking the internal equipment is done according to pre-planned layouts and involves far too many cage nuts, the bane of every server build technician’s existence.

Once the internal rows are completed, it’s time to start focusing on the customer rows that will contain bare metal and virtual servers. Each customer rack contains a minimum of five switches—two for the private network, two for the public network, and one out-of-band management switch. Each row has two power strips and in the case of the Sydney data center, two electrical transfer switches at the bottom of the rack that provide true power redundancy by facilitating the transfer of power from one independent feed to another in the case of an outage. Network cables from the customer racks route back to the aggregate switch rack located at the center of each row.

Right around the time we start to wrap up the internal and customer rows, a team of network engineers arrive on-site to run the interconnects between the networking gear and the rest of the internal systems and to light up the fiber lines connecting our new pod to our internal network (as well as the rest of the world). This is a big day because not only do we finally get Wi-Fi up in the pod, but no longer are we isolated on an island. We are connected, and teams thousands of miles away can begin the process of remotely logging in to configure, deploy, and test systems. The networking team will start work on configuring the switches, load balancers, and firewalls for their specific purposes. The storage team will begin the process of bringing massive storage arrays online, and information systems will start work on deploying the systems that manage the automation each pod provides.


(From Left) Zach Robbins, Grayson Schmidt, Igor Gorbatok and Alex Abin

During this time, we start the process of onboarding the newest members of the team, the local Sydney techs, who in a few short months will be responsible for managing the data center independently. But before they fully take over, customer racks are prepped and waiting to house the final piece of the puzzle: the servers. They arrive on truck day [check out DAL05 Pod 2 truck day]; Sydney’s was around the beginning of February. Given the amount of hardware we typically receive, truck days are an event unto themselves—more than 1,500 of the newest and fastest SuperMicro servers of various shapes and sizes that will serve as the bare metal and virtual servers for our customers. Through a combination of manpower and automation, these servers get unboxed, racked, checked in, and tested before they are sold to our customers.

Now departments involved in bringing the Sydney data center online wrap up and sign off. Then we go live.

Bringing a SoftLayer pod online and on time is a beautifully choreographed process and is one of my greatest professional accomplishments. The level of coordination and cohesion required to pull it off, not once, not twice, but ten times all over the world in the last year alone, can’t be overstated.

-Dennis

March 6, 2015

The SLayer Standard Vol. 1 No. 7: the IBM InterConnect Edition

Last week, an estimated 21,000 IBMers, SLayers, customers and partners from around the world flooded Las Vegas, Nev. to attend the first-ever IBM InterConnect. This new conference combined three popular IBM conferences (Impact, Innovate and Pulse) into a single, premier cloud and mobile techno-topia.

What our engineers and developers did in Las Vegas after conference hours might have stayed in Las Vegas, but IBM’s InterConnect hits and announcements didn’t. Here’s a recap:

Speed to Market Wins the Cloud Computing Race
Everyone likes to go fast, and the new senior vice president for IBM Cloud, Robert LeBlanc, likes to go super-fast. “What I’m focusing on is speed,” LeBlanc says.

In this blink-and-the-market-changes world, time-to-market determines the winners and losers in cloud computing. Part of LeBlanc’s strategy is opening new SoftLayer data centers. If you haven’t heard the news, SoftLayer will be launching Sydney and Montreal data centers in the next 30 days—with more locations coming soon. Stay tuned.

Read more on how LeBlanc plans to win the cloud business race.

Cloudy skies on the horizon—that’s a good thing!
Our CEO, Ginni Rometty, announced a $4 billion investment in cloud services (shared with the data analytics and mobile businesses). She’s hoping that the investment will spur $40 billion a year in revenue by 2018.

Signs of the investment could be seen as execs at InterConnect announced new hybrid services coming in 2015, including enterprise containers. [What’s a container? Read our blog post.]

In fact, hybrid was a big theme at InterConnect. “We are going to make all those clouds act like one,” says Angel Diaz, vice president of IBM cloud technologies. IBM Cloud (powered by SoftLayer) will be a one-stop shop: a cloud superstore with a smorgasbord of aaS offerings.

It looks like it’ll be an exciting ride for IBM over the next couple of years. Make sure to keep up with the headlines for more announcements in the coming months.

-JRL
