Posts Tagged 'Data Center'

December 17, 2014

Does physical location matter “in the cloud”?

By now everyone understands that the cloud is indeed a place on Earth, but there still seems to be confusion around why global expansion by way of adding data centers is such a big deal. After all, if data is stored “in the cloud,” why wouldn’t adding more servers in our existing data centers suffice? Well, there’s a much more significant reason for adding more data centers than just being able to host more data.

As we’ve explained in previous blog posts, Globalization and Hosting: The World Wide Web is Flat and Global Network: The Proof is in the Traceroute, our strategic objective is to get a network point of presence (PoP) within 40ms of all our users (and our users' users) in order to provide the best network stability and performance possible anywhere on the planet.

Data can travel across the Internet quickly, but just like anything else, the farther it has to go, the longer it takes to get there. Seems pretty logical, right? But we also need to take into account that not all routes are created equal. To deliver the best network performance, we designed our global network so that data reaches our network by the shortest route possible. Think of each SoftLayer PoP as an on-ramp to our global network backbone: the sooner a user gets onto our network, the quicker we can route them efficiently through our PoPs to a server in one of our data centers. Furthermore, once traffic is on our network, we control how it flows.

Let's take a look at the traceroute example from the aforementioned blog post. As you're probably aware, a traceroute shows the "hops," or routers, along the network path from an origin IP to a destination IP. While we were building out the Singapore data center (before the network points of presence were turned up in Asia), the author ran a traceroute from Singapore to SoftLayer.com, and immediately after the launch of the data center, he ran another one.

Pre-Launch Traceroute to SoftLayer.com from Singapore

traceroute to softlayer.com (66.228.118.53), 64 hops max, 52 byte packets
 1  10.151.60.1 (10.151.60.1)  1.884 ms  1.089 ms  1.569 ms
 2  10.151.50.11 (10.151.50.11)  2.006 ms  1.669 ms  1.753 ms
 3  119.75.13.65 (119.75.13.65)  3.380 ms  3.388 ms  4.344 ms
 4  58.185.229.69 (58.185.229.69)  3.684 ms  3.348 ms  3.919 ms
 5  165.21.255.37 (165.21.255.37)  9.002 ms  3.516 ms  4.228 ms
 6  165.21.12.4 (165.21.12.4)  3.716 ms  3.965 ms  5.663 ms
 7  203.208.190.21 (203.208.190.21)  4.442 ms  4.117 ms  4.967 ms
 8  203.208.153.241 (203.208.153.241)  6.807 ms  55.288 ms  56.211 ms
 9  so-2-0-3-0.laxow-cr1.ix.singtel.com (203.208.149.238)  187.953 ms  188.447 ms  187.809 ms
10  ge-4-0-0-0.laxow-dr2.ix.singtel.com (203.208.149.34)  184.143 ms
    ge-4-1-1-0.sngc3-dr1.ix.singtel.com (203.208.149.138)  189.510 ms
    ge-4-0-0-0.laxow-dr2.ix.singtel.com (203.208.149.34)  289.039 ms
11  203.208.171.98 (203.208.171.98)  187.645 ms  188.700 ms  187.912 ms
12  te1-6.bbr01.cs01.lax01.networklayer.com (66.109.11.42)  186.482 ms  188.265 ms  187.021 ms
13  ae7.bbr01.cs01.lax01.networklayer.com (173.192.18.166)  188.569 ms  191.100 ms  188.736 ms
14  po5.bbr01.eq01.dal01.networklayer.com (173.192.18.140)  381.645 ms  410.052 ms  420.311 ms
15  ae0.dar01.sr01.dal01.networklayer.com (173.192.18.211)  415.379 ms  415.902 ms  418.339 ms
16  po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  417.426 ms  417.301 ms
    po2.slr01.sr01.dal01.networklayer.com (66.228.118.142)  416.692 ms
17  * * *

Post-Launch Traceroute to SoftLayer.com from Singapore

traceroute to softlayer.com (66.228.118.53), 64 hops max, 52 byte packets
 1  192.168.206.1 (192.168.206.1)  2.850 ms  1.409 ms  1.206 ms
 2  174.133.118.65-static.reverse.networklayer.com (174.133.118.65)  1.550 ms  1.680 ms  1.394 ms
 3  ae4.dar01.sr03.sng01.networklayer.com (174.133.118.136)  1.812 ms  1.341 ms  1.734 ms
 4  ae9.bbr01.eq01.sng02.networklayer.com (50.97.18.198)  35.550 ms  1.999 ms  2.124 ms
 5  50.97.18.169-static.reverse.softlayer.com (50.97.18.169)  174.726 ms  175.484 ms  175.491 ms
 6  po5.bbr01.eq01.dal01.networklayer.com (173.192.18.140)  203.821 ms  203.749 ms  205.803 ms
 7  ae0.dar01.sr01.dal01.networklayer.com (173.192.18.253)  306.755 ms
    ae0.dar01.sr01.dal01.networklayer.com (173.192.18.211)  208.669 ms  203.127 ms
 8  po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  203.518 ms
    po2.slr01.sr01.dal01.networklayer.com (66.228.118.142)  305.534 ms
    po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  204.150 ms
 9  * * *

After the Singapore data center launch, the number of hops was reduced by 50 percent, and the response time (in milliseconds) was reduced by 40 percent. Those are pretty impressive numbers from just lighting up a couple of PoPs and a data center, and that was only the beginning of our global expansion in 2012.
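
If you'd like to check the math yourself, compare the last responding networklayer.com hop in each listing. Here's a quick back-of-the-envelope calculation; the figures are illustrative, and the exact percentages depend on which hops and probes you average:

awk 'BEGIN {
    printf "hops:    %.0f%% reduction\n", (16 - 8) / 16 * 100            # responding hops, pre vs. post
    printf "latency: %.0f%% reduction\n", (417.3 - 204.2) / 417.3 * 100  # final-hop RTTs in milliseconds
}'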

That’s why we are so excited to announce the three new data centers launching this month: Mexico City, Tokyo, and Frankfurt.



Of course, this is great news for customers who require data residency in Mexico, Japan, and Germany. And yes, these new locations provide additional in-region redundancy within APAC, EMEA, and the Americas. But even customers without servers in these new facilities have reason to celebrate: Our global network backbone is expanding, so users in these markets will see even better network stability and speed to servers in every other SoftLayer data center around the world!

-JRL

October 14, 2014

Enterprise Customers See Benefits of Direct Link with GRE Tunnels

We've had an overwhelming response to our Direct Link product launch over the past few months, and with good reason: customers can cross connect into the SoftLayer global private network with a direct link at any of our 22 points of presence (PoPs), giving them fast, secure, and unmetered access to their SoftLayer infrastructure from their remote data center locations.

Many of our enterprise customers who’ve set up a Direct Link want to balance the simplicity of a layer three cross connection with their sophisticated routing and access control list (ACL) requirements. To achieve this balance, many are using GRE tunnels from their on-premises routers to their SoftLayer Vyatta Gateway Appliance.

In previous blogs about the Vyatta Gateway Appliance, we've described some typical use cases and highlighted the differences between the Vyatta OS and the Vyatta Appliance, so here we'll focus specifically on GRE tunnels.

What is GRE?
Generic Routing Encapsulation (GRE) is a protocol for packet encapsulation that makes it possible to route other protocols over IP networks (RFC 2784). Customers typically create two endpoints for the tunnel: one on their remote router and the other on their Vyatta Gateway Appliance at SoftLayer.
How does GRE work?
GRE encapsulates a payload (an inner packet that needs to be delivered to a destination network) within an outer IP packet. Between the two GRE endpoints, routers look only at the outer IP packet and forward it toward the far endpoint, where the inner packet is decapsulated and routed on to its ultimate destination.
Why use GRE tunnels?
Without GRE, a customer with multiple subnets at SoftLayer would need a separate tunnel to reach each one. Because GRE encapsulates traffic within an outer packet, customers can route other protocols within the tunnel and reach multiple subnets without building multiple tunnels: a GRE endpoint on the Vyatta appliance parses the inner packets and routes them, eliminating that challenge.
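
To make that concrete, here's roughly what the on-premises end of such a tunnel could look like on a generic Linux router using iproute2. This is a minimal sketch; every address and subnet below is a hypothetical placeholder, not a SoftLayer assignment:

# Create the GRE tunnel endpoint (placeholder addresses)
ip tunnel add gre1 mode gre local 198.51.100.2 remote 203.0.113.9 ttl 255
ip addr add 192.168.255.2/30 dev gre1   # inside-tunnel addressing
ip link set gre1 up
# Multiple remote subnets ride the same single tunnel
ip route add 10.40.8.0/24 dev gre1
ip route add 10.60.22.0/24 dev gre1

Reaching another subnet is just one more route entry, not another tunnel.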

Many of our enterprise customers have complex rules governing what servers and networks can communicate with each other. They typically build ACLs on their routers to enforce those rules. Having a GRE endpoint on a Vyatta Gateway Appliance allows customers to route and manage internal packets based on specific rules so that security models stay intact.

GRE tunnels also allow customers to keep their existing addressing scheme, meaning they can add IP addresses to their SoftLayer servers and access them directly, eliminating routing problems that could otherwise occur.

And because a GRE tunnel can itself run inside a VPN tunnel, customers can wrap their GRE traffic in IPsec for added security.
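
On the SoftLayer side, the matching tunnel endpoint on the Vyatta Gateway Appliance would look something like the sketch below, written in Vyatta configuration-mode syntax and mirroring the hypothetical addresses above; an actual deployment's interface names, addresses, and routes will differ:

configure
set interfaces tunnel tun0 encapsulation gre
set interfaces tunnel tun0 local-ip 203.0.113.9       # this appliance's endpoint address
set interfaces tunnel tun0 remote-ip 198.51.100.2     # the on-premises router's endpoint
set interfaces tunnel tun0 address 192.168.255.1/30   # inside-tunnel addressing
set protocols static route 172.16.0.0/24 next-hop 192.168.255.2   # on-premises subnet via the tunnel
commit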

Learn More on KnowledgeLayer

If you are considering Direct Link to achieve fast, unmetered access with the help of GRE tunnels and the Vyatta Gateway Appliance but need more information, the SoftLayer KnowledgeLayer is continually updated with new information and best practices. Be sure to check out the entire section devoted to the Vyatta Gateway Appliance.

- Seth

October 8, 2014

An Insider’s Look at Our Data Centers

I've been with SoftLayer for over four years now. It's been a journey that has taken me around the world—from Dallas to Singapore to Washington, D.C., and back again. Along the way, I've met amazingly brilliant people who have helped me sharpen the tools in my 'data center toolbox,' allowing me to enhance the customer experience in a complex compute environment.

I like to think of our data centers as masterpieces of elegant design. We currently have 14 of these works of art, with many more on the way. Here’s an insider’s look at the design:

Keeping It Cool
Our pod layouts use a raised-floor system. Air conditioning units push chilled air up through the floor of the 'cold rows' at the front of the servers; the air passes through the servers and exhausts into the 'warm rows' behind them. The warm rows have ceiling vents to rapidly clear the warm air from the backs of the servers.

Jackets are recommended for this arctic environment.

Pumping up the POWER
Nothing is as important to us as keeping the lights on. Every data center has a three-tiered approach to keeping your servers and services running. The first tier is street power. Each rack has two power strips to distribute the load and offer true redundancy for redundant servers and switches, with the remote ability to power down an individual port on either strip.

The second tier is our battery backup for each pod, which provides seamless failover the moment street power is lost.

That leads to the third tier in our model: generators. We have generators in place to sustain continuous power until street power returns. Check out the 2-megawatt diesel generator installation at the DAL05 data center here.

The Ultimate Social Network
Neither power nor cooling matters if you can't connect to your server, which is where our proprietary network topology comes into play. Each bare metal server and each virtual server resides in a rack that connects to three switches. Each of those switches connects to an aggregate switch for the row, and the aggregate switch connects to a router.

The first switch, on our private backend network, allows for SSL and VPN connectivity to manage your server. It also gives you server-to-server communication without the worry of bandwidth overages.

The second switch, on our public network, provides public Internet access to your device, which is perfect for shopping, gaming, coding, or whatever you want to use it for. With 20TB of bandwidth coming standard on this network, the possibilities are endless.

The third and final switch, for management, lets you connect to the Intelligent Platform Management Interface (IPMI), which provides tools such as KVM, hardware monitoring, and even virtual CDs to install an image of your choosing! The cables from the switches to your devices are color-coded, labeled port-number-to-rack-unit, and masterfully arranged to maximize identification and airflow.
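
To give you a sense of what that management connection enables, here are a few common checks using the open-source ipmitool utility. The management IP and credentials below are placeholders, and you'd typically reach the management network over the VPN:

# Query power state, read the hardware sensors, and open a serial-over-LAN console
ipmitool -I lanplus -H 10.0.0.50 -U admin -P 'secret' chassis power status
ipmitool -I lanplus -H 10.0.0.50 -U admin -P 'secret' sensor list
ipmitool -I lanplus -H 10.0.0.50 -U admin -P 'secret' sol activate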

A Soft Place for Hardware
The heart and soul of our business is the computing hardware. We use enterprise-grade hardware from the ground up, from our smallest offering (a 1-core, 1GB RAM, 25GB HDD virtual server) to one of our largest (a quad 10-core, 512GB RAM bare metal server with multiple 4TB hard drives). With excellent hardware comes excellent options. There is almost always a path to improvement: unless you already have the top of the line, you can always add more, whether it's an additional drive, more RAM, or even another processor.

I hope you enjoyed the view from the inside. If you want to see the data centers up close and personal, I'm sorry to say they're closed to the public. But you can take a virtual tour of some of our data centers via YouTube: AMS01 and DAL05.

-Joshua Fox

October 6, 2014

G’day, Melbourne! SoftLayer’s LIVE in Australia.

Today, we’re excited to announce the launch of the newest SoftLayer data center in Melbourne, Australia! This facility is our first on the continent (with Sydney planned for later in the year), and it delivers that trademark SoftLayer service to our clients Down Under.

Our Aussie Mates

Over the years, our customer base has grown phenomenally in Australia, so it should come as no surprise that customers in the region have been clamoring for a SoftLayer data center Down Under to bring high-performance cloud infrastructure even closer to them. These customers have grown to immense proportions with ahead-of-their-time value propositions and innovative ideas that have turned heads around the world.

A perfect example of that kind of success is HotelsCombined.com, an online travel platform designed to streamline the process of searching for and reserving hotel rooms around the world. Their story is nothing short of brilliant: founded as a startup in 2005, the company today serves more than 25 million visitors a month, has more than 20,000 affiliates, and maintains a database of 800,000+ properties worldwide.

HotelsCombined.com partnered with SoftLayer to provision bare metal servers, virtual servers, load balancers, and redundant iSCSI storage around the world to best serve their global customer base. Additionally, they implemented data warehouse and predictive analytics capabilities on SoftLayer for their real-time predictive models and business intelligence tools.

Another great story is that of The Loft Group. I wrote about how they chose our cloud platform to roll out their Digital Learning Platform in a previous blog. They needed performance, analytics, monitoring, and scalability to accommodate their massive growth, and we were able to help.

Benefiting Down Under

Many of you have seen news of IBM's plans to expand SoftLayer into Australia over the past few months. In fact, at the recent IBM Cloud Pre-Launch event (view the full event on demand here), Lance Crosby shared our vision for the region and the synergy we are looking to create in the market.

Our expansion into Melbourne means that our customers have even more choice and flexibility when building their cloud infrastructure on our platform. With Australian data residency, many of our customers in Australia with location-sensitive workloads or regulatory/compliance data requirements immediately benefit from the new location. Additionally, with network points of presence in Sydney and Melbourne, users in Australia will see even better network performance when connecting to servers in any SoftLayer data center around the world. Users looking for additional redundancy in APAC have another location for their data, and customers who want to replicate data as though they are in the same rack can do so between Australia and one of our other locations.

Let the Bash Commence

To celebrate this exciting milestone, we have quite a few things lined up for the region. First up: a special promotion for everyone who would like to check out the performance of this facility, new customers and existing loyalists alike. You can get US$500 off your first month's order (bare metal, private virtual, public virtual—anything and everything listed in our store!) at the Melbourne data center. More details on the promo, features, and services are available here.

Next up—parties! We have a couple of networking events planned. SoftLayer customers, partners, enthusiasts, and friends are invited to join us in Melbourne on October 9, and in Auckland, New Zealand, on October 15 for a fun evening with SLayers and peers. If you're in the area and want more details, email us at marketingAP@softlayer.com with the following information:

  • Subject: I Would Like to Attend SoftLayer Night: Celebrating Data Centre Go-Live
  • Body: Your Name, contact phone number, city where you would like to attend, and one line about why you would like to attend.

Space is limited, and you don’t have much time to reserve your spot, so let us know as soon as possible.

These are exciting times. I’m extremely eager to see how Australian businesses leverage these new in-country facilities and capabilities. Stay tuned for new stories as we hear from other happy customers.

Cheers.
@namrata_kapur

September 22, 2014

Becoming a SLayer in Hong Kong

When I came on board at SoftLayer, the company was at the beginning of a growth period. IBM had just invested $1.2 billion to build 15 new data centers all over the world, including one in Hong Kong—and I was excited to get to work there!

Before I joined the Hong Kong data center’s Go Live Team as a server build tech, I went through a lengthy interview process. At the time, I was working for a multinational bank. But after the Chinese New Year, something inside me said it was time to take on a new challenge. Many people in Chinese cities look for new opportunities around the New Year; they believe it will give them luck and fortune.

After much anticipation (and interviews and paperwork), my first day was finally here. When I arrived at the SoftLayer data center, I walked through glass security doors and was met by Jesse Arnold, SoftLayer’s Hong Kong site manager; Russell Mcguire, SoftLayer’s Go Live Team leader whom I met during my interview process; and Shahzad, my colleague who was also starting work that day.

Shahzad and I felt very welcome and were excited to be joining the team. During our first-day tour, I took a deep breath and said to myself, "You can do this, Ying! This is a transition, and we never stop learning new things in life." Learning new things can be challenging; it takes mental, physical, and emotional strength.

Inside the Data Center: Building Racks!

When our team began to build racks and work with cables, it was uncharted, but not totally unfamiliar, territory for me. For a time, I worked as a cadet electrician aboard a container ship, so I have worked with cables, electric motors, and generators before—it was just in the middle of the ocean. Needless to say, I know cables, but SFP cables were new to me. With the help of my colleagues and the power of the Internet, I was up and cabling the data center in no time.

When we build a server, we check everything: the motherboard, processors, RAM, hard drives, and most importantly, OS compatibility. After learning those basics, I started to look at it like a big puzzle that I needed to solve.

Inside the Data Center: Strong Communication!

That wasn’t the only challenge. In order to do my job successfully and adhere to data center build procedures, I had to learn the best way to communicate with my colleagues.

In the data center, our team must relay messages precisely and provide all the details to ensure every step in the build-out process is done correctly. Jesse constantly reminds us what is important: communication, communication, communication. He always repeats it three times to emphasize it as a golden rule. To me, this is one sign of a successful leader. I’m glad Jesse has put a focus on communication because it is helping me learn what makes a good leader and SLayer.

Inside the Data Center: Job Satisfaction!

I am so happy to be working at SoftLayer. All the new challenges I’ve been faced with remind me of Nike’s slogan: Just Do It! And our young team is doing just that. We work six days a week for 14 hours a day. And for all of that time, I use my mental and physical strength to tackle my new job.

I’ve learned so much and am excited to expand the knowledge base I already have, so I can be a stronger asset to the SoftLayer team.

I consider myself a SLayer still in training because there is more to being a SLayer than building racks. SLayers are the dedicated people who work at SoftLayer, and they're my colleagues. As my training continues, I look forward to learning more and gaining new skills. I don't want to get old without learning new things!


- Ying

August 20, 2014

SoftLayer is in Canada, eh?

Last week, we celebrated the official launch of our Toronto (TOR01) data center—the fourth new SoftLayer data center to go live in 2014, and our first in Canada! To catch you up on our progress this year, we unveiled a data center in Hong Kong in June to provide regional redundancy in Asia. In July, we added similar redundancy in Europe with the grand opening of our London data center, and we cut the ribbon on a SoftLayer data center designed specifically for federal workloads in Richardson, TX. The new Toronto location joins our data center pods in Washington, D.C., as our second location in the northeast region of North America.

As you can imagine, our development and operations teams have been working around the clock to get these new facilities built, so they were fortunate to have Tim Hortons in Toronto to keep them going. Fueled by countless double-doubles and Timbits, they officially brought TOR01 online August 11! This data center launch is part of IBM's massive $1.2 billion commitment to expanding our global cloud footprint. Countless customers have asked us when we were going to open a facility in Canada, so we prioritized Toronto to meet that demand. And because the queue had been building for so long, as soon as the doors opened, we had a flood of new orders to fulfill. Many of these customers expressed a need for data residency in Canada to handle location-sensitive workloads, and expanding our private network into Canada means users in the region will see even better network performance to SoftLayer facilities around the world.

Here's what a few of our customers had to say about the Toronto launch:

Brenda Crainic, CTO and co-founder of Maegan, said, "We are very excited to see SoftLayer open a data center in Toronto, as we are now expanding our customer base in Canada. We are looking forward to hosting all our data in Canada, in addition to enjoying their easy-to-use services and great customer service."

Frederic Bastien, CEO at mnubo, said, "We are very pleased to have a data center in Canada. Our customers value analytics performance, data residency and privacy, and deployment flexibility—and with SoftLayer we get all that and a lot more! SoftLayer is a great technology partner for our infrastructure needs."

With our new data center, we’re able to handle Canadian infrastructure needs from A to Zed.

While we’d like to stick around and celebrate with a Molson Canadian or two, our teams are off to the next location to get it online and ready. Where will it be? You won’t have to wait very long to find out.

I’d like to welcome the new Canucks (both employees and customers) to SoftLayer. If you’re interested in getting started with a bare metal or virtual server in Canada, we’re running a limited-time launch promotion that’ll save up to $500 on your first order in Toronto: Order Now!

-John

P.S. I included a few Canadianisms in this post. If you need help deciphering them, check out this link.

March 19, 2014

An Inside Look at IBM Cloud Event 2014 in Hong Kong

On March 17 in Hong Kong, IBM and SoftLayer successfully concluded the first of many intimate cloud events. IBM Cloud Event 2014 marked the beginning of the $1.2 billion investment committed towards our global expansion plans.

Growing from 13 to 40 data centers is no mean feat, and Hong Kong is the starting point. Not only does this give our customers data redundancy in Asia-Pacific, but it also provides data residency for our Hong Kong-based customers. Quite simply, we are growing where you want to grow.

For me, there were three key takeaways from the event.

We’re seeing overwhelming support from our customers.
Not only did we have an opportunity to host our Hong Kong clientele, but many customers also traveled from cities across Greater China to be a part of this milestone. It was immensely gratifying to see them being vocal advocates of SoftLayer services. Natali Ardianto from Tiket.com, Chris Chun from 6waves, and Larry Zhang representing ePRO all shared their brilliant stories with the audience.

Tiket.com's co-founder, Natali, is especially proud of the fact that the company sold all 6,000 tickets for the K-Pop Big Bang Alive concert in 10 minutes, while a competitor's site was unable to meet the huge demand and went down for four hours during the peak period. Tiket.com, founded in 2011, weathered TCP-based DoS and DDoS attacks and hosted unsuccessfully on two different IaaS providers before moving to SoftLayer's infrastructure services in 2012.

6waves, a gaming publisher, was founded in 2008. Today, built on SoftLayer, 6waves has grown into the #1 third-party publisher on Facebook, managing 14 million monthly active users and 2 million daily active users. Chris, 6waves' CTO and co-founder, shared that 6waves has launched more than 200 games on SoftLayer since 2009.

Larry Zhang, ePRO’s senior IT manager and architect, had a similar story to share. The B2C e-commerce platform, part of China-based DX Holdings, supports more than 200,000 items in 15 categories and saw a 66 percent increase in customers from October 2011 to September 2013. ePRO is now looking to cater to the US and Australian markets, and Larry believes that SoftLayer’s aggressive expansion plans will help them meet their goal.

SoftLayer in Hong Kong

There is a vested interest in the SoftLayer-IBM integration roadmap.
Large enterprises are moving toward the cloud. This is not a forward-looking statement; it's a fact. And from the feedback gathered and the questions raised by these organizations, it is clear that they are investing in cloud services to improve their internal processes and to bring services to their end customers more quickly. Lance Crosby presented a SoftLayer-IBM integration roadmap, and with SoftLayer forming the foundation of IBM's cloud offerings—SaaS, PaaS and BPaaS—there is no doubt that we are as invested in this partnership as our clientele.

The strong startup community in Hong Kong is committed to growing with SoftLayer.
Catalyst, SoftLayer's startup incubator, has always had a strong presence in Hong Kong, and the startup spirit was evident on March 17 as well. The dedicated roundtable conducted for the community with Lance Crosby and Casey Lau, SoftLayer's Catalyst representative for APAC, was the highlight of the day. Lance left us with a powerful thought, "We are here to be an extension to your infrastructure... The question is what can you build on us."

All in all, this was a great start to our new journey!

- Namrata

August 22, 2013

Network Cabling Controversy: Zip Ties v. Hook & Loop Ties

More than 210,000 users have watched a YouTube video of our data center operations team cabling a row of server racks in San Jose. More than 95 percent of the ratings left on the video are positive, and more than 160 comments have been posted in response. To some, those numbers probably seem unbelievable, but to anyone who has ever cabled a data center rack or dealt with a poorly cabled data center rack, the time-lapse video is enthralling, and it seems to have catalyzed a healthy debate: At least a dozen comments on the video question/criticize how we organize and secure the cables on each of our server racks. It's high time we addressed this "zip ties v. hook & loop (Velcro®)" cable bundling controversy.

The most widely recognized standards for network cabling have been published by the Telecommunications Industry Association and Electronic Industries Alliance (TIA/EIA). Unfortunately, those standards don't specify the physical method used to secure cables, but it's generally understood that if you tie cables too tightly, the cable's geometry will be affected, possibly deforming the copper, distorting the twisted pairs, or otherwise physically degrading performance. That understanding raises the question of whether zip ties are inherently inferior to hook & loop ties for network cabling applications.

As you might have observed in the "Cabling a Data Center Rack" video, SoftLayer uses nylon zip ties when we bundle and secure the network cables on our data center server racks. The decision to use zip ties rather than hook & loop ties was made during SoftLayer's infancy. Our team had a vision for an automated data center that wouldn't require much server/cable movement after a rack is installed, and zip ties were much stronger and more "permanent" than hook & loop ties. Zip ties allow us to tighten our cable bundles easily so those bundles are more structurally solid (and prettier). In short, zip ties were better for SoftLayer data centers than hook & loop ties.

That conclusion is contrary to the prevailing opinion in the world of networking that zip ties are evil and that hook & loop ties are among only a few acceptable materials for "good" network cabling. We hear audible gasps from some network engineers when they see those little strips of nylon bundling our Ethernet cables. We know exactly what they're thinking: Zip ties negatively impact network performance because they're easily over-tightened, and cables in zip-tied bundles are more difficult to replace. After they pick their jaws up off the floor, we debunk those myths.

The first myth (that zip ties can negatively impact network performance) rests on a valid concern, but its significance is much greater in theory than in practice. While I couldn't track down any scientific experiments demonstrating the maximum tension a cable tie can exert on a bundle of cables before the traffic through those cables is affected, I have a good amount of empirical evidence to fall back on from SoftLayer data centers. Since 2006, SoftLayer has installed more than 400,000 patch cables in data centers around the world (using zip ties), and we've *never* encountered a fault in a network cable that was the result of a zip tie being over-tightened ... And we're not shy about tightening those ties.

The fact that nylon zip ties are cheaper than most (all?) of the other more "acceptable" options is a fringe benefit. By securing our cable bundles tightly, we keep our server racks clean and uniform:

SoftLayer Cabling

The second myth (that cables in zip-tied bundles are more difficult to replace) is also somewhat flawed when it comes to SoftLayer's use case. Every rack is pre-wired to deliver five Ethernet cables — two public, two private and one out-of-band management — to each "rack U," which provides enough connections to support a full rack of 1U servers. If larger servers are installed in a rack, we won't need all of the network cables wired to the rack, but if those servers are ever replaced with smaller servers, we don't have to re-run network cabling. Network cables aren't exposed to the tension, pressure or environmental changes of being moved around (even when servers are moved), so external forces don't cause much wear. The most common physical "failures" of network cables are typically associated with RJ45 jack crimp issues, and those RJ45 ends are easily replaced.

Let's say a cable does need to be replaced, though. Servers in SoftLayer data centers have redundant public and private network connections, but in this theoretical example, we'll assume network traffic can only travel over one network connection and a data center technician has to physically replace the cable connecting the server to the network switch. With all of those zip ties around those cable bundles, how long do you think it would take to bring that connection back online? (Hint: That's kind of a trick question.) See for yourself:

The answer in practice is "less than one minute" ... The "trick" in that trick question is that the zip ties around the cable bundles are irrelevant when it comes to physically replacing a network connection. Data center technicians use temporary cables to make a direct server-to-switch connection, and they schedule an appropriate time to perform a permanent replacement (which actually involves removing and replacing zip ties). In the video above, we show a temporary cable being installed in about 45 seconds, and we also demonstrate the process of creating, installing and bundling a permanent network cable replacement. Even with all of those villainous zip ties, everything is done in less than 18 minutes.

Many of the comments on YouTube bemoan the idea of having to replace a single cable in one of these zip-tied bundles, but as you can see, the process isn't very laborious, and it doesn't vary significantly from the amount of time it would take to perform the same maintenance with a Velcro®-secured cable bundle.

Zip ties are inferior to hook & loop ties for network cabling? Myth(s): Busted.

-@khazard

P.S. Shout-out to Elijah Fleites at DAL05 for expertly replacing the network cable on an internal server for the purposes of this video!

July 29, 2013

A Brief History of Cloud Computing

Believe it or not, "cloud computing" concepts date back to the 1950s when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via "dumb terminals" – stations whose sole function was to facilitate access to the mainframes. Due to the cost of buying and maintaining mainframes, an organization wouldn't be able to afford a mainframe for each user, so it became practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple decades later in the 1970s, IBM released an operating system called VM that allowed admins on their System/370 mainframe systems to have multiple virtual systems, or "Virtual Machines" (VMs) on a single physical node. The VM operating system took the 1950s application of shared access of a mainframe to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software that you see nowadays can be traced back to this early VM OS: Every VM could run custom operating systems or guest operating systems that had their "own" memory, CPU, and hard drives along with CD-ROMs, keyboards and networking, despite the fact that all of those resources would be shared. "Virtualization" became a technology driver, and it became a huge catalyst for some of the biggest evolutions in communications and computing.

Mainframe Computer

In the 1990s, telecommunications companies that had historically only offered single dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to allow for more users to have their own connections, telcos were able to provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary to achieve better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people expressed demand to get online, the costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers, and Virtual Dedicated Servers using the same types of functionality provided by the VM OS in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run its sites and applications. With virtualization, you could take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware needed to meet your company's needs.

Virtualization

As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: one server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system could present all of the environment's resources as though they were in a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing," since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like the telcos did in the 90s). In these cloud computing environments, it became easy to add resources to the "cloud": just add another server to the rack and configure it to become part of the bigger system.

Clouds

As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to make the cloud's benefits available to users who don't happen to have an abundance of physical servers with which to create their own cloud computing infrastructure. Those users could order "cloud computing instances" (also known as "cloud servers") by requesting the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow for users to grab pieces, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. Because we automate almost everything in our data centers, you're able to spin up load balancers and firewalls and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.

-James

February 8, 2013

Data Center Power-Up: Installing a 2-Megawatt Generator

When I was a kid, my living room often served as a "job site" where I managed a fleet of construction vehicles. Scaled-down versions of cranes, dump trucks, bulldozers and tractor-trailers littered the floor, and I oversaw the construction (and subsequent destruction) of some pretty monumental projects. Fast-forward a few years (or decades), and not much has changed except that the "heavy machinery" has gotten a lot heavier, and I'm a lot less inclined to "destruct." As SoftLayer's vice president of facilities, part of my job is to coordinate the early logistics of our data center expansions, and as it turns out, that responsibility often involves overseeing some of the big rigs that my parents tripped over in my youth.

The video below documents the installation of a new Cummins two-megawatt diesel generator for a pod in our DAL05 data center. You see the crane prepare for the work by installing counterbalance weights, and work starts with the team placing a utility transformer on its pad outside our generator yard. A truck pulls up with the generator base in tow, and you watch the base get positioned and lowered into place. The base looks so large because it also serves as the generator's 4,000-gallon "belly" fuel tank. After the base is installed, the generator is trucked in, and it is delicately picked up, moved, lined up, and lowered onto its base. The last step you see is the generator housing being installed over the generator to protect it from the elements. At this point, the actual "installation" is far from over — we need to hook everything up and test it — but those steps don't involve the nostalgia-inducing heavy machinery you probably came to this post to see:

When we talk about the "megawatt" capacity of a generator, we're talking about the amount of power available when the generator is operating at full capacity. One megawatt is one million watts, so a two-megawatt generator could power 20,000 100-watt light bulbs at the same time. That power can be sustained for as long as the generator has fuel, and we have service level agreements that keep us at the front of the line for more fuel when we need it. Here are a few other interesting use cases that could be powered by a two-megawatt generator:

  • 1,000 Average Homes During Mild Weather
  • 400 Homes During Extreme Weather
  • 20 Fast Food Restaurants
  • 3 Large Retail Stores
  • 2.5 Grocery Stores
  • A SoftLayer Data Center Pod Full of Servers (Most Important Example!)
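
And for anyone who wants to double-check the light bulb arithmetic above at a shell prompt:

echo '2 * 10^6 / 100' | bc   # 2,000,000 watts / 100 watts per bulb = 20,000 bulbs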

Every SoftLayer facility has an n+1 power architecture. If we need three generators to provide power for three data center pods in one location, we'll install four. This additional capacity allows us to balance the load on generators when they're in use, and we can take individual generators offline for maintenance without jeopardizing our ability to support the power load for all of the facility's data center pods.

Those of you who fondly remember Tonka trucks and CAT crane toys are the true target audience for this post, but even if you weren't big into construction toys when you were growing up, you'll probably still appreciate the work we put into safeguarding our facilities from a power perspective. You don't often see the "outside the data center" work that goes into bringing a new SoftLayer data center pod online, so I thought I'd give you a glimpse. Are there any topics from an operations or facilities perspective that you'd also like to see?

-Robert
