
September 2, 2015

Backup and Restore in a Cloud and DevOps World

Virtualization has brought many improvements to compute infrastructure, including snapshots and live migration.[1] When an infrastructure moves to the cloud, these options often become a client’s primary backup strategy. While snapshots and live migration can be part of a successful strategy, backing up in the cloud may require additional tools.

First, a basic question: Why do we take backups? They’re taken to recover from:

  • The loss of an entire machine
  • Partially corrupted files
  • A complete data loss (either through hardware or human error)

While losing an entire machine is frightening, corrupted files or data loss are the more common reasons for data backups.

Snapshots are useful when the snapshot and restore occur in close proximity to each other, e.g., when you’re migrating middleware or an operating system and want to fall back quickly if something goes wrong. If you need to restore after extensive changes (hardware or data), a snapshot isn’t an adequate resource. The restore may require restoring to a new machine, selecting files to be restored, and moving data back to the original machine.

So if a snapshot isn’t the silver bullet for backing up in the cloud, what are the effective backup alternatives? The solution needs to handle a full system loss, partial data loss, or corruption, and ideally work for both virtualized and non-virtualized environments.

What to back up

There are three types of files that you’ll want to consider when backing up an active machine’s disks:

  • Binary files: Changed by operating system and middleware updates; can be easily stored and recovered.
  • Configuration files: Define how the binary files are connected and configured, and what data is accessible to them.
  • Data files: Generated by users and unrecoverable if not backed up. Data files are the most precious part of the disk content and losing them may result in a financial impact on the client’s business.

Keep in mind when determining your backup strategy that each file type has a different change rate—data files change faster than configuration files, which are more fluid than binary files. So, what are your options for backing up and restoring each type of file?

Binary files
In the case of a system failure, DevOps advocates (see Martin Fowler on Phoenix Servers) propose simply provisioning a new machine, which all cloud providers can do automatically, middleware included. Automated provisioning processes are available for both bare metal and virtual machines.

Note that most open source products require only an Internet connection and a single command for installation, while commercial products can be provisioned through automation.
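
As an illustration, here’s a minimal sketch of treating binaries as a provisioning problem rather than a backup problem, using the SoftLayer Python client (pip install SoftLayer). The hostname, datacenter, and sizing values below are placeholders, not recommendations:

    import SoftLayer

    # Reads SL_USERNAME and SL_API_KEY from the environment.
    client = SoftLayer.create_client_from_env()
    vs_manager = SoftLayer.VSManager(client)

    # Provision a fresh virtual server instead of restoring OS and middleware
    # binaries from backup; configuration management (next section) then
    # layers the middleware back on top.
    instance = vs_manager.create_instance(
        hostname='rebuilt-app01',   # placeholder name
        domain='example.com',       # placeholder domain
        datacenter='dal05',         # placeholder datacenter
        os_code='UBUNTU_LATEST',
        cpus=2,
        memory=4096,                # in MB
        hourly=True,
    )
    print('Provisioning started, instance id:', instance['id'])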

Configuration files
Cloud-centric operations have a distinct advantage over traditional operations when it comes to backing up configuration files. In traditional operations, each element is configured manually, which is both time-consuming and error-prone. Cloud-centric operations, or DevOps, treat each configuration as code, which allows an environment to be built from a source configuration via automated tools and procedures. Tools such as Chef, Puppet, Ansible, and SaltStack show their power with central configuration repositories that drive the composition of an environment. A central repository works well with another component of automated provisioning—changing the IP address and hostname.

You have limited control over how the cloud will allocate resources, so you need an automated method to collect that information and apply it to all the machines being provisioned.
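
For instance, here’s a minimal sketch of that idea using only the Python standard library: discover the hostname and IP address the cloud assigned, then render them into a configuration file. A real deployment would use a tool’s fact system (Ohai for Chef, Ansible facts, and so on); the template and target path below are hypothetical:

    import socket
    from string import Template

    # Collect the facts the cloud decided for us at provisioning time.
    hostname = socket.gethostname()
    ip_address = socket.gethostbyname(hostname)

    # Render them into a configuration file instead of hard-coding values.
    template = Template('ServerName $host\nListen $ip:8080\n')
    rendered = template.substitute(host=hostname, ip=ip_address)

    with open('/tmp/app.conf', 'w') as conf:  # hypothetical target path
        conf.write(rendered)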

In a cloud context, it’s suboptimal to manage machines individually; instead, the machines have to be seen as part of a cluster of servers, managed via automation. Cluster automation is one of the core tenets of solutions like CoreOS’ Fleet and Apache Mesos. Resources are allocated and managed as a single entity via API, configuration repositories, and automation.

You can attain automation in small steps. Start by choosing an automation tool and begin converting your existing environment one file at a time. Soon, your entire configuration will be centrally available, and recovering a machine or deploying a full environment becomes a single automated process.

In addition to being able to quickly provision new machines with your binary and configuration files, you are also able to create parallel environments, such as disaster recovery, test and development, and quality assurance. Using the same provisioning process for all of your environments ensures consistent environments and early detection of potential production problems. Packages, binaries, and configuration files can be treated as data and stored in something like an object store, which is available in some form with all cloud solutions.

Data files
The final files to be backed up and restored are the data files. These files are the most important part of a backup and restore and the hardest to replace. Part of the challenge is the volume of data as well as access to it. Data files are relatively easy to back up, the exception being files in transition, e.g., files being uploaded. Data file backups can be done with several tools, including synchronization tools or a full file backup solution. Another option is an object store, the natural repository for relatively static files, which allows for a pay-as-you-go model.
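
As a sketch of the object store option, the snippet below uploads a directory of data files to a Swift-based object store (SoftLayer object storage speaks the Swift API) using python-swiftclient. The auth URL, credentials, and source directory are placeholders:

    import os
    from swiftclient.client import Connection

    # Placeholder credentials; SoftLayer object storage uses Swift v1.0 auth.
    conn = Connection(
        authurl='https://dal05.objectstorage.softlayer.net/auth/v1.0',
        user='ACCOUNT:username',
        key='api-key',
    )
    conn.put_container('data-backups')

    for root, _dirs, files in os.walk('/var/app/data'):  # hypothetical data dir
        for name in files:
            path = os.path.join(root, name)
            with open(path, 'rb') as fh:
                # Object names mirror file paths, so a restore is a plain copy back.
                conn.put_object('data-backups', path.lstrip('/'), contents=fh)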

Database content is a bit harder to back up. Even with instant snapshots on storage, backing up databases can be challenging. A snapshot at the storage level is an option, but it doesn’t allow for a partial database restore. Also, a snapshot can capture in-flight transactions that can cause issues during a restore, which is why most database systems provide a mechanism for online backups. Those online backups should be leveraged in combination with tools for file backups.
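
As a rough sketch of combining the two, the snippet below takes an online, transaction-consistent dump with the database’s own tooling (mysqldump here) and compresses it so the file backup flow above can pick it up. The credentials and paths are placeholders:

    import gzip
    import subprocess
    from datetime import datetime

    # --single-transaction takes a consistent online dump of InnoDB tables
    # without locking out writers.
    dump = subprocess.run(
        ['mysqldump', '--single-transaction',
         '--user=backup', '--password=secret', 'appdb'],  # placeholders
        check=True, capture_output=True,
    ).stdout

    stamp = datetime.utcnow().strftime('%Y%m%d%H%M')
    with gzip.open('/backups/appdb-{}.sql.gz'.format(stamp), 'wb') as out:
        out.write(dump)  # the dump now rides along with the data file backup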

Something to remember about databases: many solutions end up accumulating data even after the data is no longer used. The data within an active database includes both current and historical data. Having both in one place allows for data analytics on the same database, but it also increases the size of the database, making database-related operations harder. It may make sense to archive older data in other databases or flat files, which keeps the database volumes manageable.

Summary

To recap, because cloud provides rapid deployment of your operating system and convenient places to store data (such as object stores), it’s easy to factor cloud into your backup and recovery strategy. Borrowing the separation at the heart of containerization, split the content of your machines into binaries, configuration, and data. Focus on automating the deployment of binaries and configuration; it allows easier delivery of an environment, including quality assurance, test, and disaster recovery. Finally, use traditional backup tools for backing up data files. Together, these practices make it possible to rapidly and repeatedly recover complete environments while controlling the amount of backed-up data that has to be managed.

-Thomas

[1] Snapshots are not available on bare metal servers that have no virtualization capability.

August 31, 2015

Data Ingestion and Access Using Object Storage

The massive growth in unstructured data (documents, images, videos, and so on) is one of the greatest problems facing today’s IT personnel. The challenge is storing all the data in a solution that can keep growing along with it. Object storage is an ideal, cost-effective, scale-out solution for storing extensive amounts of unstructured data.

SoftLayer offers object storage based on the OpenStack Swift platform. Object storage provides a fully distributed, scalable, API-accessible storage platform that can be integrated directly into applications. It can be used for storing static data, such as virtual machine (VM) images, photos, emails, and so on.

There are two important use cases when working with object storage: data ingestion and data access.

Data ingestion use case
A large medical research company needs to upload a large amount of data into their SoftLayer compute instance. The requirement is for a multi-hundred-terabyte image repository that contains hundreds of millions of images. Researchers will then upload code to run on bare metal servers with GPUs to process the images in the repository. The images range from 512KB CT images to 30MB to 50MB mammograms and are logically grouped into 12 million “studies.” The client wants to onboard the data as quickly as possible.

Recommendations

  • Evenly distribute the objects into approximately 1,000 containers for the initial upload. For the number of objects the client needs to store, our tests have shown that having a much larger number of containers, or too few objects per container, would incur significant performance penalties. The proposed 1,000 containers strike a good balance between parallelism in object creation and manageable container sizes.
  • Concurrently add new objects to all containers using 400 worker threads for small objects (e.g., 512KB CT images) and 40 worker threads for large objects (e.g., 30MB to 50MB mammograms); see the sketch after this list. The ideal number of worker threads depends on the workload size. Using too few threads yields better per-request response times but lower throughput. Using significantly more threads may degrade both latency and throughput because the threads start competing for resources.
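
Here’s a minimal sketch of those two recommendations together, assuming python-swiftclient: objects are spread over 1,000 containers by a stable hash, and uploads run on a bounded pool of worker threads, with one Swift connection per thread since connections aren’t thread-safe. The auth details and the image source are placeholders:

    import threading
    import zlib
    from concurrent.futures import ThreadPoolExecutor
    from swiftclient.client import Connection

    NUM_CONTAINERS = 1000  # per the first recommendation
    WORKERS = 400          # for small objects, e.g., 512KB CT images
    _local = threading.local()

    def get_conn():
        # One connection per worker thread; Connection is not thread-safe.
        if not hasattr(_local, 'conn'):
            _local.conn = Connection(
                authurl='https://dal05.objectstorage.softlayer.net/auth/v1.0',
                user='ACCOUNT:username', key='api-key')  # placeholders
        return _local.conn

    def container_for(name):
        # A stable hash spreads objects evenly across the container set.
        return 'studies-{:04d}'.format(zlib.crc32(name.encode()) % NUM_CONTAINERS)

    def upload(name, data):
        get_conn().put_object(container_for(name), name, contents=data)

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for name, data in image_batches():  # hypothetical (name, bytes) source
            pool.submit(upload, name, data)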

Data access use case
A large technology company has a mix of GET, PUT, and DELETE operations for which it needs object storage capable of holding billions of small objects (15KB or less). They also want consistent latencies for their operation mix (GET 54%, PUT 33%, and DELETE 13%), which requires optimal tuning for consistent performance. The client’s benchmarking calls for 1,400 operations per second.

Recommendations

  • Use multiple containers (at least 40) to improve the latency for PUT and DELETE operations. As long as the objects are distributed over at least 40 containers with a sufficient number of worker threads, the average latencies for PUT and DELETE operations were well below 100ms in our tests. There may be occasional latency spikes, which are not surprising on shared storage systems, but overall, the latencies should be relatively consistent.
    • Object reads (GETs) do not access containers. The read latency for a GET is very fast—less than 20ms on average for small objects.
  • Use multiple containers if very high throughput is needed. In our tests, we could drive more than 6,000 transactions per second on the production cluster with at least 40 containers. A shared production cluster has higher latencies than a dedicated environment, so more worker threads would be needed to achieve high throughput. (A benchmarking sketch follows this list.)
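
Here’s a minimal benchmarking sketch along those lines, again assuming python-swiftclient: replay the 54/33/13 GET/PUT/DELETE mix against 40 containers and record per-operation latencies. The connection details are placeholders, and a production benchmark would pre-populate objects and run many threads:

    import random
    import statistics
    import time
    from swiftclient.client import Connection

    conn = Connection(
        authurl='https://dal05.objectstorage.softlayer.net/auth/v1.0',
        user='ACCOUNT:username', key='api-key')  # placeholders

    containers = ['bench-{:02d}'.format(i) for i in range(40)]
    for c in containers:
        conn.put_container(c)

    payload = b'x' * 15 * 1024  # 15KB object, per the use case
    latencies = {'GET': [], 'PUT': [], 'DELETE': []}
    written = []  # (container, name) pairs that currently exist

    for i in range(1000):
        op = random.choices(['GET', 'PUT', 'DELETE'], weights=[54, 33, 13])[0]
        if op != 'PUT' and not written:
            op = 'PUT'  # nothing to read or delete yet
        start = time.perf_counter()
        if op == 'PUT':
            target = (random.choice(containers), 'obj-{}'.format(i))
            conn.put_object(*target, contents=payload)
            written.append(target)
        elif op == 'GET':
            conn.get_object(*random.choice(written))
        else:
            conn.delete_object(*written.pop(random.randrange(len(written))))
        latencies[op].append((time.perf_counter() - start) * 1000.0)

    for op, ms in sorted(latencies.items()):
        if ms:
            print('{}: mean {:.1f} ms over {} ops'.format(op, statistics.mean(ms), len(ms)))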

-Naeem Altaf & Khoa Huynh

August 28, 2015

Under the Infrastructure: It’s all about personality with server build technician Yoan-Aleksandar Spasov

Are you ready, folks? It’s time, once again, to lift our cloud high and put some SLayer sparkle into your sky. Last week, we went Under the Infrastructure to introduce you to Mathijs Dubbe, a sales engineer in Amsterdam. This week, we’re staying abroad in the Netherlands so you can meet Yoan-Aleksandar Spasov, a server build technician who’s been with us just shy of a year.

SoftLayer: Tell us about a day in the life of a server build technician.

Yoan-Aleksandar Spasov: It’s very different in Europe, because we rotate between three shifts depending on the month (as far as I know, in the States you get a permanent shift, so you only stay on that shift). We start in the mornings, evenings, or nights. You begin by picking up what’s left over from the shift before, so hopefully it’s not too big of a hand-off. We have a task list that lists the primaries and secondaries for each person on shift. Of course, there will be people who are better at transactions, hardware, or maintenance. So you get to do what you’re good at, and you get to work. If you’ve been with SoftLayer for a while, you’ll end up being good at everything.

SL: What shift are you on right now?

Spasov: I’m on the evening shift, so I start at 2 p.m. and work until 11 p.m. Each shift is very different. During the day shift, you have management available to you so you can do more projects. The evening shift is more customer-oriented because the States are just waking up, and we’re getting all those orders; there are a lot of builds and servers that need attention. The night shift is quiet and mainly maintenance, so you have upgrades and things like that.

SL: We didn’t even think about that. That does make it pretty different.

Spasov: Yup.

SL: What’s the coolest thing about your job?

Spasov: There are so many things, to be honest. For me, it’s been awesome because I’m very young and I just started, so this is one of my first real jobs. I had no real data center knowledge before I started. I started from scratch, and the whole team taught me. That’s one of the coolest parts of my job: you get awesome training. The other thing is that you get to work with amazing people and amazing teams. Everything else is hardware. We have awesome gear that you don’t get to see everywhere. It’s awesome. It’s amazing. It’s a privilege to work with that many components and that volume of components.

SL: How’d you get into this role? Since you didn’t have any prior data center experience, what’s your background?

Spasov: I had some hardware experience. I built PCs. I’ve always liked computers and electronics, and then I got into servers, and I’m learning something new every day.

SL: This piggybacks a bit on what we just talked about, but what does it take to become a server build technician? What kind of training, experience, or natural curiosities do you need?

Spasov: You must have amazing attention to detail; that’s very important. You have to follow protocols, which are there for a reason. You have to learn a lot. It’s not only just basic knowledge that you need to know, but it’s also the ability to find the knowledge and research it in the moment, whenever you have issues to deal with or any problems. You have to be able to reach out to other people and be able to look into documentation so we can learn from previous occurrences.

SL: Did you need a specific degree? We get this question a lot on our YouTube channel, and people are always asking, “How did you get that job? What kind of training do you need for that job? Where do you start for that kind of job? Do you have to go to school for this?”

Spasov: Having a technical degree or technical knowledge is good; that’s a definite plus. But even if you start without any hardware knowledge, you can build on the training from the company. It’s very specific with SoftLayer because we have our one-of-a-kind internal management system. You can’t learn about it anywhere else besides our company. If you knew other systems, you might try to draw parallels between the two, and that’s not going to work. It’s completely different. And that’s what makes SoftLayer so unique.

SL: Tell me something that you think nobody knows about being a server build technician.

Spasov: I have a feeling that a lot of customers think that there isn’t a person on the other side and that it’s all automated. But there’s a personality behind every update. There’s someone thinking about it and what to write and how to communicate with the customer to make them feel better, more secure, and to show that they’re in good hands.

SL: That’s a really good point. We’ll bet a lot don’t realize how many people go into making SoftLayer “SoftLayer.” It’s not just processes.

Spasov: That’s right.

SL: Do you have a plan in the event of a zombie apocalypse?

Spasov: I’m going to hide in the data center because I’m sure we’ll have the supplies. Our office manager stocks food for us, so I’m sure we’ll last a while.

SL: [laughing] That’s a good plan.

Those saucy SLayers get us every time.

We’re feelin’ it. Are you feelin’ it? (You know you are.) Then come back next week for the latest and greatest Under the Infrastructure, where we’re peeling back the cloud layer like it’s going out of style.

-Fayza

August 25, 2015

Free Resources for Your Startup

Building and running a startup is both difficult and expensive. From salary to servers to services, the demands on your budget are constant and come from all directions. On the Catalyst team we know this firsthand—our program was created as a way for startups to access SoftLayer's robust platform before they have revenue or funding.

After moving to Boulder, Colorado, in 2012, the first startup I joined was a member of the Catalyst program. Without Catalyst, our organization would have been paying out of pocket for the bare metal servers we needed. Instead, that money was freed up for other essentials (like food to keep us alive).

Infrastructure isn't the only area in which startups can leverage free offerings. Since joining the Catalyst team one year ago, I've tracked and collected other free resources for startups. I compiled my research into a presentation that I've given at a few events. The presentation is available on SlideBean (a free online presentation platform, what else?) and is constantly being updated. Some highlights are below:

Big Company Programs
The Catalyst program is a model of how big companies can meaningfully engage with startups, and we're not the only ones doing it.

  • SVB: Silicon Valley Bank offers a program called Accelerator. Perks include free checking and financial mentorship. While saving on business checking won't make a big dent in your cash flow, the financial mentorship is top notch. The SVB team consists of banking experts who can offer advice on fundraising, financial instruments, and cash management.
  • SendGrid: Email deliverability is crucial for your company, so start with the best in the business. The free plan includes 10,000 emails per month, up from 200 emails per day when I first started giving this talk. Go to the pricing page and scroll down to the bottom for the free plan. (Full disclosure: SendGrid is a former partner.)
  • NASDAQ Exact Equity: I was recently at a VC conference, where I had two separate conversations about investors’ frustrations with disorganized or downright undocumented cap tables. The NASDAQ Exact Equity freemium tool will not only help you wrangle your cap table, but it will also signal success to the investor by showing that you’re thorough and organized.

Startup Freebies
I'm not going to cover the basics, such as Evernote, Trello, Asana, Pivotal Tracker, Launch Rock, Bootstrap, Google Drive, etc. You probably already know about these programs. Instead, I’ll share a few great ones you may not know about.

  • Docracy: If you need any sort of legal document, Docracy should be your first stop. The legal documents were prepared by lawyers and are available for free. The choices range from SaaS Terms & Conditions to founder agreements.
  • HTML5 UP: Need a quick, easy, and responsive template for your site? When WordPress is too much of a hassle for a splash page, head over to HTML5 UP for dozens of choices of free templates.
  • UI Kit: As you're moving from the free HTML5 UP template toward being able to build out your site with the free Bootstrap toolkit, save yourself coding time and get the UI Kit for free design elements such as lightbox, slider, accordions, and more.
  • SlideBean: I love SlideBean. While searching for "free PowerPoint templates," I discovered that all the templates were hideous. Then I stumbled across SlideBean and fell in love with it. It makes putting together a presentation quick and easy, and keeps it from looking like you traveled to 1999 to get your template.

Collections
Below are my favorite collections of resources for any freebies that I haven’t already covered.

  • Product Hunt List: The founder of CrazyEgg and KISSmetrics has an exhaustive list of free and freemium products for your startup.
  • Freebie.supply: Over 400 resources are grouped by category. I especially love the design resources.
  • Startup Stash: Not all of the deals are free; many come in the form of percentage discounts. But if you're going to pay for something, check F6S first for a discount.

And finally, the best piece of advice when trying to save money can be found in my last post: A Grandmother’s Advice for Startups: You never know ‘til you ask.

Have a free resource that you absolutely love that’s missing from my list? Email me at rmaloy@softlayer.com or tweet me @stoneybaby and let me know!

-Rich

August 21, 2015

Under the Infrastructure: Get International with Sales Engineer Mathijs Dubbe

Did you have oh-so-much fun meeting client services rep Neil Thomas last week? We sure hope so.

The fun continues because now you’re in for another sweet SLayer treat. This week in Under the Infrastructure, peek into the world of sales engineer Mathijs Dubbe. He’s based in Amsterdam and has been holding down the fort there since April 2015.

SoftLayer: How’d you end up at SoftLayer, Mathijs?

Mathijs Dubbe: I was an infrastructure and data services consultant at a data center and cloud hosting provider in the Netherlands, so [the sales engineer opportunity at SoftLayer] was pretty similar to what I was already doing. I’d known [about SoftLayer] for quite a while already. I’d seen it before and checked out what they were doing, and it sounded like fun. I’d seen the YouTube videos, with truck days and setting up pods, and that appealed to me. It was innovative.

SL: What does a typical day look like at SoftLayer in your shoes?

Dubbe: When I get to the office, I look at the tickets that remain from the last shift and clean them up. I’ll start my day by checking my email and seeing what my colleagues in Amsterdam are up to. During the day, there will be conference calls and meetings, things like that.

SL: How many black SoftLayer shirts do you own?

Dubbe: Three.

SL: That’s pretty good. Your collection is getting started! At this point, you’re still wearing other clothes to work besides SoftLayer shirts? Because there are some people who only wear SoftLayer gear.

Dubbe: When I have enough shirts, I’ll probably do that [laughs]. I’m currently in the IBM building, so I like to show off the brand.

SL: You’ve gotta represent, right?

Dubbe: Yeah.

SL: What have you learned working at SoftLayer?

Dubbe: A lot of stuff, actually. Related to international business, my former employer was fairly regional, but at SoftLayer, there are many international customers and that’s quite fun. I’ve learned about different kinds of people with different languages and accents; people working in Israel on Sundays. In a technical sense, it’s similar to what I did, but the technical stuff is always architected in a different way. I’ve learned quite a bit since I got here.

SL: We agree with your point about the international scale. You’re dealing with an office in Singapore and an office in Amsterdam and dealing with different languages and everyone in between, so it’s pretty dynamic.

Dubbe: I like that, too.

SL: What was the last costume that you wore?

Dubbe: [laughs] Costume? I dressed up like a road worker once.

SL: You did? For what?

Dubbe: For Carnival in February. I’m not usually the kind of guy that goes [to those sorts of things], but sometimes it’s fun. It’s not like anything they have in Brazil, though.

SL: That sounds like a really good time.

Aren’t SLayers the greatest? (We know you’re nodding.) That’s why you’ll want to stay tuned for our next installment of Under the Infrastructure, where we’ll wade waist-deep into the SLayer cloud.

-Fayza

August 19, 2015

Selling Cloud in the Cloud

Conventionally, the sales department’s style consists of men and women dressed to the nines—tailored suits and expensive Italian shoes. Appearance is a key factor in success, and most follow and master S.C.O.T.S.M.A.N., a forecasting and qualification system that stands for scope, competition, originality, time, size, money, authority, and need (we’ll get to this later).

Once a prospective client is qualified, they are placed in the sales funnel, and that’s when the fun begins. Deals are made over fancy dinners or a round of golf. Factors like location and size determine the time it takes to close the deal. Sometimes it takes days. Sometimes it takes months.

But in the cloud industry, where “on-demand” is the name of the game, following the conventional sales process can be a bottleneck in and of itself. Cloud, by its very nature, allows for spontaneity—such that by the time conventional salesmen arrange a wine-and-dine meeting, the new breed of cloud salesmen have already closed the deal and happy customers are accessing their deployed servers.

In the cloud, there’s no time for face-to-face with the customer, so most opt for comfortable t-shirts and jeans over tailored suits and fine Italian leather shoes. And in the absence of these things, products and services provide the wow factor that lures customers.

SoftLayer offers a variety of wow factors. Depending on how the customer’s business uses servers, any of the points below (or a combination of them) could serve as a wow factor:

  • Free incoming and server-to-server data transfer, as well as bandwidth pooling
  • Auto Scale and rapid deployment (virtual servers in as little as five to 10 minutes and bare metal in as little as one to four hours)
  • Free, premium, round-the-clock technical, billing, and sales support
  • Complete control and flexibility
  • Dedicated basic server resources, including CPU, RAM, and storage, on all server types
  • No long-term contracts
  • Seamless connectivity between virtual and bare metal platforms

Although the cloud marketplace today looks saturated, the providers offering genuine cloud services are few and far between, and the disparities among the services they offer are vast, laying waste to the theory that cloud is a commodity. Today, there are ads boasting price advantages and overly dramatized, high-pitched marketing punch lines promising unrealistic offerings. Remember the old adage, “The bitterness of low quality remains long after the sweetness of low price is forgotten.”

Given the control and flexibility inherent in the SoftLayer platform with no contract to tie you down, the SoftLayer sales process cuts through the clutter and seeks to satisfy the last three elements of the S.C.O.T.S.M.A.N. system:

  • Is there a need for a new cloud environment? Are you looking to host a new application or are you looking to move an existing application from another hosting provider or from an in-house environment? If existing, what are your primary reasons for wanting to move?
  • Do you have a budget for a new cloud environment?
  • Do you have the authority to place an order on behalf of your organization? (To make it easy, it will cost $0.00 for the first month with a no-strings attached cancellation if you’re not satisfied.)

Some cloud consumers I have spoken to confess to choosing their hosting provider based on convenience rather than value offering. Cloud providers who are household names try as much as they can to blur the differences between offerings and propagate the doctrine of cloud being a commodity. As the cloud marketplace matures, the success of this strategy has an expiration date—soon!

- Valentine Che, Global Sales, AMS01

August 17, 2015

ImageNet Machine-Vision Competitors to Receive GPU-Enabled Bare Metal Cloud Servers from SoftLayer and NVIDIA

For the first time in the history of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), this year’s qualifying participants will receive free use of bare metal cloud servers equipped with two NVIDIA Tesla K80 dual-GPU accelerators, provided by IBM Cloud and NVIDIA.

Kicking off last Friday, the ILSVRC is an annual object-detection and image-classification competition intended to advance the fields of machine learning and pattern recognition. It’s hosted by the University of North Carolina (UNC), Stanford University, and the University of Michigan.

Over the next three months, teams from around the world will compete to detect, locate, and classify patterns within a huge set of images taken from Internet sources that are tagged with metadata by human volunteers. The overall goal is to develop the most accurate image recognition algorithms with the lowest percentage of classification errors. To read more about the competition, visit NVIDIA’s recent Parallel Forall blog post and the ILSVRC 2015 home page.



[Image: Examples of ImageNet images demonstrating classification with localization.]

The combination of SoftLayer servers and Tesla K80 GPUs gives teams the most powerful supercomputing cloud servers available in the marketplace today. To give you a quick overview of the specs, each bare metal cloud server comes with:

  • Two NVIDIA Tesla K80 GPU Accelerators
  • Dual Intel Xeon E5-2690 CPUs
  • 128GB RAM
  • Two 1TB SATA HDDs (RAID 0)

By offering these cloud resources to ILSVRC teams, we’re helping pave the way for advances in the fields of machine learning and deep learning. We’re looking forward to seeing how these teams leverage our powerful, scalable, and secure cloud platform to develop innovative new methods for training deep neural networks.

Our support of this year’s ILSVRC adds to IBM’s rich legacy of providing innovative resources in the machine learning space, including IBM Watson and other software and services. ILSVRC teams are welcome to leverage third-party resources in their approaches, including the IBM Watson Visual Recognition Service, available on IBM Bluemix, and AlchemyVision from AlchemyAPI, an IBM Company.

If you’re interested in joining the competition and getting complimentary access to SoftLayer cloud servers with NVIDIA Tesla K80 GPUs, go to the ILSVRC 2015 home page and register your team. Once accepted into the competition, team leaders will be provided with access methods and credentials by NVIDIA and IBM.

And stay tuned for competition highlights as the ILSVRC continues over the next three months. Winners will be announced in November. Best of luck to all the competitors!

More About IBM Cloud Resources
While IBM Cloud is offering free resources to qualifying ILSVRC participants, the same GPU-enabled bare metal servers are also available to all of our customers in any of IBM Cloud’s SoftLayer data centers. These resources—along with SoftLayer’s high-bandwidth, low-latency network, high-performance storage, and data ingestion options like Aspera, Direct Link, and data transfer service—make IBM Cloud the ideal choice for machine-learning deployments in the cloud. To learn more, visit https://www.softlayer.com/gpu.

-Betsy

August 14, 2015

Under the Infrastructure: Nerding out with Client Services Rep Neil Thomas

Sure, we know SoftLayer is your most favorite cloud provider under the sun. (And we totally heart you back.) But how well do you know us—the individual brains and brawn beneath our cloud? Yeah, we had a feeling you’d give us that blank look. Luckily for you, we’re going to fix that snafu. Starting right now.

Today we're launching a series that’ll introduce us to you, one SLayer at a time. Enter “Under the Infrastructure.” We SLayers are a diverse, fascinating, and storied bunch. So come on in, kick off your shoes, and get to know the gang.

To kick things off, you’re going to meet Neil Thomas, a client services representative who has been stationed at our global headquarters in Dallas (DAL11, for those keeping score at home) for six months.



[Photo: “That’s Liam. He’s a chunk, and outside of work, he’s my whole world.”]

SoftLayer: So, Neil, tell us about a day in the life of a client services representative.

Neil Thomas: The client services team is responsible for many things. The most important one, in my opinion, is customer education. We are tasked with contacting new customers at set intervals (five days, 30 days, and 90 days from account creation) and making sure they stay informed on the platform's offerings and capabilities. I come in each day, log into all my tools and websites, and start calling new customers—anywhere from 30 to 80 customers a day. We also help identify new sales leads and handle some customer complaints, as long as they don't require a representative from accounting or support.

SL: So your inbox is definitely not at zero.

Thomas: Correct! It's busy, but it's satisfying being able to help customers with what they need.

SL: What's your favorite thing about being a SLayer, half a year in?

Thomas: Everyone here seems die-hard dedicated to what they do, and that seems to bring the whole team closer together. I love that for such a large company, everyone seems so close-knit. Coming from a 50-employee MSP, I didn't think I would find that here.

SL: That is definitely the SoftLayer way!

Thomas: And everyone seems to actually care about what the customer is going through and what the customer needs. Most companies tout that they are about that, when in reality, it's all bottom line.

SL: What have you learned since working at SoftLayer?

Thomas: I come from a technical background, having been a systems administrator and working a ticket queue. While I was comfortable talking on the phone and handling customer service needs, I've really had to develop my interpersonal skills to engage the customer and get them to open up. The SoftLayer employee atmosphere has helped me do just that. I didn't have much sales experience, and the guys in the sales department have really helped me understand what it's like to have a good conversation with a customer.

SL: Was it difficult for you?

Thomas: It was difficult at first, but it gets easier every day. There's a tremendous amount of support from my teammates and leadership to help me grow in the ways that I need to grow.

SL: Describe your work space for us.

Thomas: I'm a nerd. Always have been, always will be. My cube has a plush Tux (the Linux mascot), a remote-controlled Ferrari Enzo, and a few collectors' edition PEZ dispenser sets. The cubes are low enough to socialize with coworkers or pop up for a quick question, but not so tall that you feel isolated from the rest of the world, like a normal cube farm would make you.

SL: If we weren't all nerds, we wouldn't work at SoftLayer, right? Nerds are the best.

Thomas: I wholeheartedly agree.

SL: What would you do if you were the lone survivor in a plane crash?

Thomas: Everyone says that you should buy a lottery ticket in situations like that. I think it should be the opposite, because if you've survived a plane crash, then obviously that's sucked up most of your luck.

SL: Good point.

Thomas: Assuming I'd crashed somewhere that made for an easy rescue, or had been randomly happened upon after crashing on a deserted island, I'd more than likely take a long time off and spend it with my wife and my son, Liam. I'm a workaholic, though, so even if I got a book or movie deal, I'd still keep my day job and work the rest of my life.

SL: Would you make up a Lost-type story or would it be strictly factual?

Thomas: It would probably end up being a mix of both. The systems admin in me would want to stick to the facts, while the sci-fi nerd in me would want to embellish. I'd probably throw a mix together and let people’s imaginations run wild.

SL: You gotta take creative license when the situation permits.

Thomas: Definitely.

And that’s just the tip of the iceberg, folks. Join us for our next segment of Under the Infrastructure, where we’ll keep diving into the deepest depths of the cloud, SLayer by SLayer.

-Fayza

August 12, 2015

Network Performance 101: What is latency, and why does it matter?

We’ve all been there. Waiting for a web page to load can be so frustrating that we end up just closing out. You might ask yourself, “Hey, I have high-speed Internet. Why is this happening to me?” Well, there are a lot of factors outside your control that … control page loads. And whether you have an online store, run big data solutions, or have employees around the world accessing files over your network, you never want slow data transfer to cost you a sale or drag down employee productivity.

So why are some pages so much slower to load than others?
It could be that poorly written code or large images are slowing the load on the backend, but slow page loads can also be caused by network latency. This might sound elementary, but data is not just floating out there in some non-physical Internet space. In reality, data is stored on hard drives … somewhere. Network connectivity provides a path for that data to travel to end users around the world, and that connectivity can vary significantly—depending on how far it’s going, how many times the data has to hop between service providers, how much bandwidth is available along the way, the other data traveling across the same path, and a number of other variables.

The measure of how long data takes to travel between two connected points is called network latency: the amount of time it takes a packet of data to get from one place to another.

Understanding Network Latency
Theoretically, data can travel at the speed of light across optical fiber network cables, but in practice, data typically travels slower than light due to the variables we referenced in the previous section. If a network connection doesn’t have any available bandwidth capacity, data might temporarily queue up to wait for its turn to travel across the line. If a service provider’s network doesn’t route a network path optimally, data could be sent hundreds or thousands of miles away from the destination in the process of routing to the destination. These kinds of delays and detours lead to higher network latency, which lead to slower page loads and download speeds.
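
A quick back-of-the-envelope example makes that floor concrete. Light in optical fiber travels at roughly two-thirds of its speed in a vacuum (about 200,000 km/s), and the distance below is an approximate great-circle path between Dallas and Frankfurt:

    # Theoretical minimum latency over fiber, before any queuing or detours.
    SPEED_IN_FIBER_KM_PER_S = 200_000  # roughly 2/3 the speed of light
    ROUTE_KM = 8_300                   # approximate Dallas-Frankfurt distance

    one_way_ms = ROUTE_KM / SPEED_IN_FIBER_KM_PER_S * 1000
    print('one-way: {:.0f} ms, round trip: {:.0f} ms'.format(one_way_ms, 2 * one_way_ms))
    # Prints roughly: one-way: 42 ms, round trip: 83 ms

Real-world latency always sits above that floor; how far above is what routing and capacity decisions control.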

We express network latency in milliseconds (that’s 1,000 milliseconds per second), and while a few thousandths of a second may not mean much to us as we’re living our daily lives, those milliseconds are often the deciding factors for whether we stay on a webpage or give up and try another site. As consumers of high-speed Internet, we like what we like, and we want what we want when we want it. In the financial sector, milliseconds can mean billions of dollars in gains or losses from trade transactions on a day-to-day basis.

Logical conclusion: Everyone wants the lowest network latency to the greatest number of users.

Common Approaches to Minimize Network Latency
If our shared goal is to minimize latency for our data, the most common approaches to addressing network latency involve limiting the number of potential variables that can impact the speed of data’s movement. While we don’t have complete control over how our data travels across the Internet, we can do a few things to keep our network latency in line:

  • Distribute data around the world: Users in different locations can pull data from a location that’s geographically close to them. Because the data is closer to the users, it is handed off fewer times, it has a shorter distance to travel, and inefficient routing is less likely to cause a significant performance impact.
  • Provision servers with high-capacity network ports: Huge volumes of data can travel to and from the server every second. If packets are delayed due to fully saturated ports, milliseconds of time pass, pages load slower, download speeds drop, and users get unhappy.
  • Understand how your providers route traffic: When you know how your data is transferred to users around the world, you can make better decisions about where you host your data.

How SoftLayer Minimizes Network Latency
To minimize latency, we took a unique approach to building our network. All of our data centers are connected to network points of presence. All of our network points of presence are connected to each other via our global backbone network. And by maintaining our own global backbone network, our network operations team is able to control network paths and data handoffs much more granularly than if we relied on other providers to move data between geographies.

[Diagram: SoftLayer Private Network]

For example, if a user in Berlin wants to watch a cat video hosted on a SoftLayer server in Dallas, the packets of data that make up that cat video will travel across our backbone network (which is exclusively used by SoftLayer traffic) to Frankfurt, where the packets would be handed off to one of our peering or transit public network partners to get to the user in Berlin.

Without a global backbone network, the packets would be handed off to a peering or transit public network provider in Dallas, and that provider would route the packets across its network and/or hand the packets off to another provider at a network hop, and the packets would bounce their way to Germany. It’s entirely possible that the packets could get from Dallas to Berlin with the same network latency with or without the global backbone network, but without the global backbone network, there are a lot more variables.

In addition to building a global backbone network, we also segment public, private, and management traffic onto different network ports so that different types of traffic can be transferred without interfering with each other.

[Diagram: SoftLayer Private Network]

But at the end of the day, all of that network planning and forethought doesn’t amount to a hill of beans if you can’t see the results for yourself. That’s why we put speed tests on our website so you can check out our network yourself (for more on speed tests, check out this blog post).
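
If you’d rather script it than click, here’s a minimal sketch that approximates round-trip latency by timing TCP handshakes (a handshake takes about one round trip, and unlike ICMP ping it needs no raw-socket privileges). The target hostname below is a placeholder:

    import socket
    import statistics
    import time

    def tcp_rtt_ms(host, port=443, samples=5):
        """Time TCP connects to approximate round-trip latency in milliseconds."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            # connect() returns once the SYN/SYN-ACK exchange completes.
            with socket.create_connection((host, port), timeout=3):
                pass
            times.append((time.perf_counter() - start) * 1000.0)
        return times

    rtts = tcp_rtt_ms('speedtest.dal05.softlayer.com')  # placeholder hostname
    print('median RTT: {:.1f} ms'.format(statistics.median(rtts)))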

TL;DR: Network Latency
Your users want your data as quickly as you can get it to them. The time it takes for your data to get to them across the Internet is called network latency. The more control you (or your provider) have over your data’s network path, the more consistent (and lower) your network latency will be.

Stay tuned. Next month, we’ll continue the Network Performance 101 series with a post on security, where we’ll cover all things cloud security—including answers to your burning questions: Can other people see or access my data in a public cloud? Is my data more prone to hackers? And what safeguards does SoftLayer have in place to protect data?

-JRL

August 11, 2015

The SLayer Standard Vol. 1, No. 14

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

We’re revving the IBM Cloud engine.
How is SoftLayer helping IBM’s cloud grow? Ed Scannell explores this in a new TechTarget article. He says many of the latest successes are “attributed to the IBM cloud unit's ability to respond faster to market opportunities, along with the ability to build corporate data centers significantly faster than IGS via SoftLayer.”

It’s time to turn to the cloud.
Across the industry, companies are seeing legacy software decline. In a recent CBR article, James Nunns says he believes the solution could be in the cloud, and he highlights some of the transitions that IBM is making. Steve Robinson, IBM’s general manager of cloud platform services, says, "Today's rapid app development cycles require developers to use new tools and methodologies from across the ecosystem to quickly turn new ideas into enterprise-class cloud applications at consumer scale and innovate at the speed of cloud."

A case for both private and public cloud.
Are you still writing a pros and cons list to compare private and public cloud? It’s time to put the list away. IBMer Philip Guido explains, “Over the next five years, both public and private clouds are expected to grow at the exact same compound annual growth rate.” One thing to remember is that the choice of cloud model is “largely predicated by the business conditions of the industry a company is operating in.”

-Rachel

