Author Archive: Nathan Day

June 13, 2012

SoftLayer Private Clouds - A Cloud to Call Your Own

Those of us who've been in this industry for years have seen computing evolve pretty significantly, especially recently. We started with dedicated servers running a single operating system, and we were floored by innovations that allowed dedicated servers to run a hypervisor with many operating systems. The next big leap brought virtual machine "cloud" instances into the spotlight ... And the resulting marketing shenanigans have been a blessing and a curse. On the positive side, the approachable "cloud" term is a lot easier to talk about with a nontechnical audience, but on the negative side, we see uninformative TV commercials that leverage cloud as a marketing term, and we see products that further obfuscate what cloud technology actually means:

Cloud Phone?

To make sure we're all on the same page, as we continue to talk about "cloud," our definition is pretty straightforward:

  • It's an operations model.
  • It provides capacity on demand.
  • It offers consumption-based pricing.
  • It features self-service provisioning.
  • It can be accessed and managed via an API (see the quick sketch below).
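
To make that last bullet concrete, here's a minimal sketch of what "managed via an API" can look like, using the SoftLayer Python client. The username and API key are placeholders, and the calls shown are just a couple of simple read operations:

```python
# Minimal sketch using the SoftLayer Python client (pip install SoftLayer).
# The username and API key below are placeholders -- substitute your own.
import SoftLayer

client = SoftLayer.create_client_from_env(
    username='your_portal_username',
    api_key='your_api_key',
)

# Pull basic account details -- the same information the portal shows you.
account = client.call('Account', 'getObject')
print(account['companyName'])

# List the cloud computing instances on the account.
for vsi in client.call('Account', 'getVirtualGuests'):
    print(vsi['id'], vsi['hostname'])
```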

Understanding those characteristics, when you hear about cloud in the hosting industry, you're usually hearing about cloud computing instances in a public cloud environment. An instance in a public cloud is one of many instances operating on a shared cloud infrastructure alongside other similar instances that aren't managed by you. Your data is still secure, and you can still get good performance in a public cloud environment, but you're not managing the cloud infrastructure on which your instance resides ... You're using a piece of a cloud.

What we announced at Cloud Expo East is the next step in the evolution of technology in our industry ... We're providing a turnkey, on-demand way for our customers to provision their own Private Clouds with Citrix CloudPlatform, powered by Apache CloudStack.

You don't get a piece of the cloud. You have your own cloud, provisioned in a matter of hours on a month-to-month contract.

For those who have looked into building a private cloud for their business in the past, it's probably worth reiterating: With SoftLayer and CloudStack, you can have a geographically distributed, secure, private cloud environment provisioned in a matter of hours (not months). Given the complexity of a private cloud environment — involving a management server, private cloud zones, host servers and object storage — this is no small feat.

SoftLayer Private Clouds

Those unbelievable provisioning times are only part of the story ... When that cloud infrastructure is deployed quickly, it's fully integrated into the SoftLayer platform, so it leverages our global private network alongside your existing bare metal, dedicated and virtual servers. Want to add public cloud instances to your private cloud as web heads? You'll log into one portal or use a singular API to have that done in an instant.
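
To illustrate that "singular API" point, here's a hedged sketch of ordering an additional public cloud instance with the SoftLayer Python client; the hostname, datacenter code and OS code are example values, not recommendations:

```python
# Sketch: provisioning a public cloud instance through the same API you use for
# the rest of your infrastructure. The specific values below are examples only.
import SoftLayer

client = SoftLayer.create_client_from_env()   # reads SL_USERNAME / SL_API_KEY
vs_manager = SoftLayer.VSManager(client)

new_web_head = vs_manager.create_instance(
    hostname='web01',            # example hostname
    domain='example.com',
    cpus=2,
    memory=4096,                 # MB
    datacenter='dal05',          # example datacenter code
    os_code='UBUNTU_LATEST',
    hourly=True,
)
print('Provisioning instance id:', new_web_head['id'])
```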

Your own cloud infrastructure, fully integrated into SoftLayer's global infrastructure. If you're chomping at the bit to try it out for yourself, email us at privateclouds@softlayer.com, and we'll get you on the "early access" list.

Before I sign off, I want to be sure to thank everyone at SoftLayer and Citrix who worked so hard to make SoftLayer Private Clouds such an amazing new addition to our platform.

-@nday91

May 14, 2012

Synergy and Cloud - Going Beyond the Buzzwords

Citrix Synergy 2012 took over San Francisco this week. Because Citrix is one of SoftLayer's technology partners, you know we were in the house, and I thought I'd share a few SoftLayer-specific highlights from the conference.

Before I get too far, I should probably back up and give you a little context for what the show is all about if you aren't familiar with it. In his opening keynote, Citrix CEO Mark Templeton explained:

"We call it 'Citrix Synergy,' but really it's 'Synergy' because this is an event that's coordinated by us across a hundred sponsors, our ecosystem partners, companies in the industry that we work together with to bring you an amazing set of solutions around cloud, virtualization, networking and mobility."

Given how broad a spectrum those areas of technology represent, the short four-day agenda was jam-packed with informational sessions, workshops, demos and conversations. It goes without saying that SoftLayer had to be in the mix in a BIG WAY. We had a booth on the expo hall floor, I was lined up to lead a breakout session about how businesses can "learn how to build private clouds in the cloud," and we were the proud presenting sponsor of the huge Synergy Party on Thursday night.

Our partnership with Citrix is unique. We incorporate Citrix NetScaler and Citrix XenServer as part of our service offerings, and Citrix is also a SoftLayer customer, using SoftLayer infrastructure to offer a hosted desktop solution. Designed and architected from the ground up to run in the cloud, the Citrix Virtual Demo Center provides a dashboard interface for managing Citrix XenDesktop demo environments that are provisioned on demand using SoftLayer's infrastructure.

My biggest thrill at the conference came when I was asked to speak and share a little of our expertise in a keynote address on simplifying cloud networking. I like to tell people I have a great face for radio, but that didn't keep me off the stage. The hall was packed to capacity, and after defeating a few "demo gremlins," I got to show off how easy SoftLayer makes it for our customers to take advantage of amazing products like Citrix NetScaler VPX:

In my "Learn How to Build Private Clouds in the Cloud" breakout session, I had a little more time to speak to the larger question of how SoftLayer is approaching the shift to cloud-specific architectures and share some best practices in moving to a private cloud. Private clouds are a great way to provide real-time service delivery of IT resources with a single-tenant, customized, secure environment. However, the challenge of scaling and managing physical resources still exists, so I tried to explain how businesses can leverage an Infrastructure-as-a-Service provider to add scalability to a private cloud environment.

Thanks to SynergyTV, that presentation has been made available for all to see:

As I joked at the beginning of the breakout session, an attendee at Citrix Synergy was probably bombarded by "the cloud" in presentations and conversations at the show. While it's important to demystify the key terms we use on a daily basis, a few straight days of keynotes and breakout sessions about the cloud can get you thinking, "All work and no play makes Jack a dull boy." Beyond our capabilities as a cloud infrastructure provider, SoftLayer knows how to have a good time, so after we took care of the "work" stuff in the sessions above, we did our best to help provide a little "play" as well. This year, we were the proud sponsor of the Synergy Party, featuring Lifehouse!

Citrix Synergy 2012 was a blast. As a former rocket scientist, I can say that authoritatively.

-@nday91

October 18, 2011

Adding 'Moore' Storage Solutions

In 1965, Intel co-founder Gordon Moore observed an interesting trend: "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase."

Moore was initially noting the number of transistors that can be placed on an integrated circuit at a relatively constant minimal cost. Because that measure has proven so representative of the progress of our technological manufacturing abilities, "Moore's Law" has become a cornerstone in discussions of pricing, capacity and speed of almost anything in the computer realm. You've probably heard the law used generically to refer to the constant improvements in technology: In two years, you can purchase twice as much capacity, speed, bandwidth or any other easily measurable and relevant technology metric for the price you would pay today at current levels of production.
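
For a rough back-of-the-envelope feel for that generic reading of the law, assume capacity doubles every two years (the starting point below is an arbitrary example):

```python
# Rough illustration of "twice the capacity every two years." The starting
# capacity is an arbitrary example value.
start_capacity_gb = 32
doubling_period_years = 2

for years in range(0, 11, 2):
    capacity = start_capacity_gb * 2 ** (years / doubling_period_years)
    print(f"In {years:2d} years: ~{capacity:,.0f} GB for roughly the same price")
```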

Think back to your first computer. How much storage capacity did it have? You were excited to be counting in bytes and kilobytes ... "Look at all this space!" A few years later, you heard about people at NASA using "gigabytes" of space, and you were dumbfounded. Fast-forward a few more years, and you wonder how long your 32GB flash drive will last before you need to upgrade the capacity.

32GB Thumb Drive

As manufacturers have found ways to build bigger and faster drives, users have found ways to fill them up. As a result of this behavior, we generally go from "being able to use" a certain capacity to "needing to use" that capacity. From a hosting provider perspective, we've seen the same trend from our customers ... We'll introduce new high-capacity hard drives, and within weeks, we're getting calls about when we can double it. That's why we're always on the lookout for opportunities to incorporate product offerings that meet and (at least temporarily) exceed our customers' needs.

Today, we announced QuantaStor Storage Servers, dedicated mass storage appliances with exceptional cost-effectiveness, control and scalability. Built on SoftLayer Mass Storage dedicated servers with the OS NEXUS QuantaStor Storage Appliance OS, the solution supports up to 48TB of data with the right combination of performance, economics, scalability and manageability. To give you a frame of reference, this is 48TB worth of hard drives:

48TB

If you've been looking for a fantastic, high-capacity storage solution, you should give our QuantaStor offering a spin. The SAN (iSCSI) + NAS (NFS) storage system delivers advanced storage features including thin provisioning and remote replication. These capabilities make it ideally suited for a broad set of applications, including VM deployments, virtual desktops, and web and application servers. From what I've seen, it's at the top of the game right now, and it looks like a perfect option for long-term reliability and scalability.

-@nday91

October 11, 2011

Building a True Real-Time Multiplayer Gaming Platform

Some of the most innovative developments on the Internet are coming from online game developers looking to push the boundaries of realism and interactivity. Developing an online gaming platform that can support a wide range of applications, including private chat, avatar chats, turn-based multiplayer games, first-person shooters, and MMORPGs, is no small feat.

Our high-speed global network significantly minimizes the reliability, access, latency, lag and bandwidth issues that commonly challenge online gaming. Once users begin to experience latency or reliability problems, they're gone and likely never to return. Our cloud, dedicated and managed hosting solutions enable game developers to rapidly test, deploy and manage rich interactive media on a secure platform.

Consider the success of one of our partners — Electrotank Inc. They’ve been able to support as many as 6,500 concurrent users on just ONE server in a realistic simulation of a first-person shooter game, and up to 330,000 concurrent users for a turn-based multiplayer game. Talk about server density.

This is just scratching the surface, because we're continuing to build our global footprint to reduce latency for users around the world. That means no awkward pauses or jumping around, just a smooth, seamless, interactive online gaming experience. The combined efforts of SoftLayer's infrastructure and Electrotank's performant software have produced a high-performance networking platform that delivers a highly scalable, low-latency experience to both gamers and game developers.

Electrotank

You can read more about how Electrotank is leveraging SoftLayer’s unique network platform in today's press release or in the fantastic white paper they published with details about their load testing methodology and results.

We always like to hear our customers' opinions, so let us know what you think.

-@nday91

July 25, 2011

Under the Hood of 'The Cloud'

When we designed the CloudLayer Computing platform, our goal was to create an offering where customers would be able to customize and build cloud computing instances that specifically meet their needs: If you go to our site, you're even presented with an opportunity to "Build Your Own Cloud." The idea was to let users choose where they wanted their instance to reside as well as their own perfect mix of processor power, RAM and storage. Today, we're taking the BYOC mantra one step farther by unveiling the local disk storage option for CloudLayer computing instances!

Local Disk

For those of you familiar with the CloudLayer platform, you might already understand the value of a local disk storage option, but for the uninitiated, this news presents a perfect opportunity to talk about the dynamics of the cloud and how we approach the cloud around here.

As the resident "tech guy" in my social circle, I often find myself helping friends and family understand everything from why their printer isn't working to what value they can get from the latest and greatest buzzed-about technology. As you'd probably guess, the majority of the questions I've been getting recently revolve around 'the cloud' (thanks especially to huge marketing campaigns out of Redmond and Cupertino). That abstract term effectively conveys the intentional sentiment that users shouldn't have to worry about the mechanics of how the cloud works ... just that it works. The problem is that as the world of technology has pursued that sentiment, the generalization of the cloud has abstracted it to the point where this is how large companies are depicting the cloud:

Cloud

As it turns out, that image doesn't exactly elicit the "Aha! Now I get it!" epiphany of users actually understanding how clouds (in the technology sense) work. See how I pluralized "clouds" in that last sentence? 'The Cloud' at SoftLayer isn't the same as 'The Cloud' in Redmond or 'The Cloud' in Cupertino. They may all be similar in the sense that each cloud technology incorporates hardware abstraction, on-demand scalability and utility billing, but they're not created in the same way.

If only there were a cloud-specific Declaration of Independence ...

We hold these truths to be self-evident, that all clouds are not equal, that they are endowed by their creators with certain distinct characteristics, that among these are storage, processing power and the ability to serve content. That to secure these characteristics, information should be given to users, expressed clearly to meet the needs of the cloud's users;

The Ability to Serve Content
Let's unpack that Jeffersonian statement a little by looking at the distinct characteristics of every cloud, starting with the third ("the ability to serve content") and working backwards. Every cloud lives on hardware. The extent to which a given cloud relies on that hardware can vary, but at the end of the day, you – as a user – are not simply connecting to water droplets in the ether. I'll use SoftLayer's CloudLayer platform as a specific example of what a cloud actually looks like: We have racks of uniform servers – designated as part of our cloud infrastructure – installed in rows in our data centers. All of those servers are networked together, and we worked with our friends at Citrix to use the XenServer platform to tie all of those servers together and virtualize the resources (or more simply: to make each piece of hardware accessible independently of the rest of the physical server it might be built into). With that infrastructure as a foundation, ordering a cloud server on the CloudLayer platform simply involves reserving a small piece of that cloud where you can install your own operating system and manage it like an independent server or instance to serve your content.

Processing Power
Understanding the hardware architecture upon which a cloud is built, the second distinct characteristic of every cloud ("processing power") is fairly logical: The more powerful the hardware used for a given cloud, the better processing performance you'll get in an instance using a piece of that hardware.

You can argue about what software uses the least resources in the process of virtualizing, but apples-to-apples, processing power is going to be determined by the power of the underlying hardware. Some providers try to obfuscate the types of servers/processors available to their cloud users (sometimes because they are using legacy hardware that they wouldn't be able to sell/rent otherwise), but because we know how important consistent power is to users, we guarantee that CloudLayer instances are based on 2.0GHz (or faster) processors.

Storage
We walked backward through the distinct characteristics included in my cloud-specific Declaration of Independence because of today's CloudLayer Computing storage announcement, but before I get into the details of that new option, let's talk about storage in general.

If the primary goal of a cloud platform is to give users the ability to scale instantly from 1 CPU of power to 16 CPUs of power, the underlying architecture has to be as flexible as possible. Let's say your cloud computing instance resides on a server with only 10 CPUs available, so when you upgrade to a 16-CPU instance, your instance will be moved to a server with enough available resources to meet your need. To make that kind of quick change possible, most cloud platforms are connected to a SAN (storage area network) or other storage device via a back-end network to the cloud servers. The biggest pro of having this setup is that upgrading and downgrading CPU and RAM for a given cloud instance is relatively easy, but it introduces a challenge: The data lives on another device that is connected via switches and cables and is being used by other customers as well. Because your data has to be moved to your server to be processed when you call it, it's a little slower than if a hard disk was sitting in the same server as the instance's processor and RAM. For that reason, many users don't feel comfortable moving to the cloud.

In response to the call for better-performing storage, there has been a push toward incorporating local disk storage for cloud computing instances. Because local disk storage is physically available to the CPU and RAM, the transfer of data is almost immediate, and I/O (input/output) rates are generally much higher. The obvious benefit of this setup is that the storage performs much better for I/O-intensive applications, while the tradeoff is that it loses the inherent redundancy of having the data replicated across multiple drives in a SAN (which is almost like its own cloud ... but I won't confuse you with that right now).
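
If you want a rough feel for that difference in your own environment, you can time sequential writes against each storage option from inside an instance. This is a crude sketch rather than a proper benchmark (a tool like fio will give far better numbers), and the mount points are assumptions about where each volume lives:

```python
# Crude sequential-write check -- a sketch, not a rigorous benchmark.
# The mount points below are assumptions; point them at your local disk and
# SAN-backed volumes respectively.
import os
import time

def write_throughput_mb_s(path, total_mb=256, chunk_mb=4):
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, 'wb') as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually reaches the disk
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

print('Local disk :', round(write_throughput_mb_s('/local/testfile'), 1), 'MB/s')
print('SAN volume :', round(write_throughput_mb_s('/san/testfile'), 1), 'MB/s')
```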

The CloudLayer Computing platform has always been built to take advantage of the immediate scalability enabled by storing files in a network storage device. We heard from users who want to use the cloud for other applications that they wanted us to incorporate another option, so today we're happy to announce the availability of local disk storage for CloudLayer Computing! We're looking forward to seeing how our customers are going to incorporate cloud computing instances with local disk storage into their existing environments with dedicated servers and cloud computing instances using SAN storage.

If you have questions about whether the SAN or local disk storage option would fit your application best, click the Live Chat icon on SoftLayer.com and consult with one of our sales reps about the benefits and trade-offs of each.

We want you to know exactly what you're getting from SoftLayer, so we try to be as transparent as we can when rolling out new products. If you have any questions about CloudLayer or any of our other offerings, please let us know!

-@nday91

May 25, 2011

"The Cloud" via Tools and Bridges

As Chief Scientist (or Chief Boffin, if you like), I spend a significant amount of time participating in industry, partner and customer events alike. This week is a great example, as I will be speaking at both All About the Cloud and the Citrix Synergy event in San Francisco. I will be covering similar ground on both occasions: the general idea is that the world does not revolve around "the cloud." In fact, "the cloud" tends to be good for certain things and not so good for others. The challenge is that many customers seem to think that cloud is a panacea, solving all of their problems. Often, customers come to us with a blurred idea of why they want cloud, sometimes defaulting to, "The CEO says we need some cloud."

My presentation at the All About the Cloud event is going to focus on the cloud question by trying to understand what each tool does well so you can deploy accordingly and ensure needs are met. I'll provide a backdrop of market growth and then dive into dedicated, virtual and hybrid (cloud + dedicated) solutions with an eye to understanding each solution in broad terms ... As an aside, I wanted to show up with a drill, a nail and a chunk of 2x4 to demonstrate this: I was going to pound the nail into the board with the drill, and then I was told this would be a bad idea. I may yet show up with some tools – all I need is a Home Depot close to the Palace Hotel!

The Citrix presentation is not quite so bold - well, it did not involve props in its initial incarnation. For the Synergy crowd, I'll speak to a few case studies that leverage hybrid solutions to best meet their needs. Specifically, I will discuss companies that have deployed cloud + dedicated, SoftLayer dedicated + someone else's cloud (the horror!) and an enterprise example with a mix of internal data center assets and SoftLayer assets.

The enterprise example is an interesting one and it is timely given what Citrix is up to. Part of the challenge with most enterprise customers is the fact that many have invested significant capital (both dollars and the human variety) in their own infrastructure. This often means that an additional level of complexity is introduced as the enterprise must consider how to bridge the gap between their own infrastructure and another, external (hopefully a SoftLayer) environment.

Citrix is about to launch Cloud Bridge, which will help manage some of this – the offering enables customers to transparently connect their own data centers with an off-premises cloud environment. SoftLayer can make this happen in two ways: Cloud Bridge will sit within the NetScaler Platinum offering that we support, and customers will have the ability to deploy it themselves should they choose to.

I will follow up on this blog with some depth that covers both presentations, as I think this is a conversation worth continuing. In the meantime, I am off to find a Home Depot ...

-@nday91

April 20, 2011

An Innovative Approach to Managed Hosting

One of SoftLayer's driving principles is innovation — Our mantra is 'Innovate or Die.' We don't focus on offering the lowest cost solutions; we strive to offer the most innovative solutions, which in turn brings customers the greatest value.

Take as an example SoftLayer Managed Hosting, a new service we're launching this week.

A quick survey of the market tells us a number of key things about managed hosting in terms of the value proposition offered, as well as the challenges that it can present. The value proposition seems clear: Organizations that need their infrastructure managed and don't have the internal resources to do so can either expand their IT capabilities or look externally to a service provider to take on the work. Many choose the second option because it is much faster and more cost effective than building an internal function. Elimination of infrastructure management responsibilities combined with a lower price would seem to deliver significant value.

So where's the downside?

A typical managed services deal comes with a 3-5 year contract, often accompanied by an early termination fee. The end result: customer lock-in. If the service is not up to snuff, it is difficult to move to another provider.

This is great for the provider, but not so great for the customer. To make matters even less customer-centric, these deals tend to be "all or none" affairs: the service provider wants to add management fees to everything, not just the pieces the customer wants managed. On top of that, provisioning time can be horrendous. A managed environment typically takes 10-15 business days to deliver before the customer can even access it. That's a painful length of time when you compare it with the five minutes it takes to provision a SoftLayer cloud instance or the 2-4 hours it takes to get a dedicated box online and ready for you.

Understanding the competitive landscape, we decided to take a different approach with our Managed Hosting: The innovative approach.

Instead of a 10-15 day provisioning window, we'll have your managed environment up and running within one (1) business day of ordering.

From a contract perspective, we are confident enough in our service to offer month-to-month terms. If you don't like the service or if we can't deliver, you should be free to find a provider that meets your needs — no penalties incurred. Isn't it time to expect a provider to earn your business each month? This arrangement also makes managed hosting feasible for short-term needs and applications.

Additionally, SoftLayer Managed Hosting is not "all or none." We'll manage only the pieces of the solution that you want managed.

And to top everything off, it just so happens that we can deliver these solutions at a price point lower than anyone else in the market because of the platform's flexibility.

In this case, innovation brings customers the greatest service value AND the best price!

CBNO

-@nday91

P.S. Neovise prepared a detailed report on our managed hosting offering: A New Breed of Managed Hosting for the Cloud Computing Age. If you like white papers (and who doesn't?), it'll be right up your alley.

March 18, 2011

Parallels on SoftLayer: Webinar Series

We recently started a webinar series with our friends over at Parallels to help customers in different markets understand how they can use SoftLayer and Parallels to power their businesses. A shared hoster, an IT professional and a web designer are going to have different needs and priorities when it comes to infrastructure architecture and control panel management, so our goal for the webinar series is to address some of those differences to help you understand how flexible and powerful the SoftLayer + Parallels combination can be for your business ... regardless of your industry.

If you're a shared hosting provider, and you didn't have an opportunity to join us for our "Parallels on SoftLayer for Shared Hosters" session last week, we have an archived version available for you here:

If you fancy yourself more of a web designer, this week's session might be more interesting for you:

If you don't fall into one of those categories and consider yourself more of a "Jack of all trades" when it comes to IT, this general session might be the best fit for you:

What other markets or industries would you like us to feature? Are these kinds of webinars helpful to you?

-@nday91

November 10, 2010

The Custom-Made Cloud

Not to toot my own horn, but I am an actual Rocket Scientist (well, an Aerospace Engineer, but Rocket Scientist sounds way cooler). When you are a Rocket Scientist, most of your time is spent dealing with facts – universal constants, formulas and a data set that has been validated countless times over. My role as CTO at SoftLayer is sometimes a challenge because I have to deal with the terrific hyperbole that the tech world inevitably creates. Consider the Segway, Unified Messaging, etc. I think cloud computing has also fallen prey.

The cloud promises a lot and it does deliver a lot.

  • Control puts decisions and actions in the hands of the customer. Self-service interfaces enable automated infrastructure provisioning, monitoring, and management. APIs provide even greater automation by supporting integration with other tools and processes, and enabling applications to self-manage.
  • Flexibility provides a broader range of capabilities and choices, enabling the customer to strike an ideal balance of capital and operating expenses. In addition, access to additional infrastructure resources happens in minutes rather than weeks, enabling you to respond "on demand" to changes in demand.
  • Flexibility and control combined give administrators more choice: Who manages the infrastructure – internal staff or a service provider? Where are workloads processed – in an internal data center or in the public cloud? When are workloads processed – is this resource-driven or priority-driven? How much is consumed – is this policy-driven or demand-driven? How is IT consumed – via central administration or self-service?

Despite its numerous benefits, the operational and cost effectiveness of the cloud for many enterprises is limited by the fact that most cloud services come in fixed configurations and only serve as standalone solutions. One cloud does not fit all – fixed specs do not allow administrators to optimize a cloud environment with the ratio of processing power, memory or storage that its intended application needs for best performance. Most cloud service providers offer a relatively small number of preconfigured virtual machine images (VMIs), often starting with small, medium and large VMIs, each with preset amounts of CPU, RAM and storage. The challenge is that even a few sizes (versus only one) don't fit everybody's needs. Applications perform best when they run on servers with optimized configurations, and every application has unique resource demands. If the server is "too small," performance issues may arise. If the server is "too large," the customer ends up paying for more resources than necessary.
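
A tiny worked example makes the point; the instance sizes and prices here are invented purely for illustration:

```python
# Illustration only: the preset sizes and prices are made up for this example.
# An app that needs 2 cores and 12 GB of RAM doesn't fit the "medium" preset,
# so on a fixed-size cloud it gets bumped to "large" and pays for idle cores.
presets = {
    'small':  {'cores': 1, 'ram_gb': 4,  'price': 50},
    'medium': {'cores': 2, 'ram_gb': 8,  'price': 100},
    'large':  {'cores': 8, 'ram_gb': 16, 'price': 300},
}
need = {'cores': 2, 'ram_gb': 12}

fits = [name for name, spec in presets.items()
        if spec['cores'] >= need['cores'] and spec['ram_gb'] >= need['ram_gb']]
cheapest_preset = min(fits, key=lambda name: presets[name]['price'])

# Hypothetical a-la-carte pricing: medium base price plus $10 per extra GB of RAM.
custom_price = 100 + (need['ram_gb'] - 8) * 10

print('Cheapest preset that fits:', cheapest_preset,
      '@', presets[cheapest_preset]['price'], '/mo')
print('Custom-built instance    : @', custom_price, '/mo')
```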

To a degree we have already been doing lots of "cloudy" things given our focus on automation. Combine that with a set of tools that let customers self-provision and I think you see where I am headed. The next step up the value chain is SoftLayer's "Build Your Own Cloud" solution. It delivers all of the benefits that I discussed above, but adds the logical step of handing configuration control to the customer. Customers are able to determine a number of things about the environment that their cloud sits on.

Cloud Computing Options
Cloud Computing Options Part Two (Monthly)

The end result is a cloud environment that is fit for customer purpose and customer cost. A classic win-win situation. I wonder what we will think of next.

-@nday91

June 9, 2010

DNS from All Angles

Serving up content on the internet can be a tricky business. It isn't just about running web or app servers in an efficient and reliable manner; one of the other critical factors is DNS. You have to understand and optimize how the name the content is advertised under gets translated to the IP address of the content. I don't want to turn this into a DNS primer, but the two ends of the line of communication are the authoritative DNS server, controlled by the domain owner, which stores the official translation of the name to the number, and the resolving DNS server, which acts as a cache and is what the end-user connects to directly. Both ends of the chain have their own idiosyncrasies that can affect how quickly and reliably your content gets delivered.
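
To see both ends of that chain in action, here's a small sketch using the dnspython library: one lookup goes through whatever resolving server your machine normally uses, and one goes straight to the domain's authoritative name server. The domain is just a stand-in:

```python
# Sketch with dnspython (pip install dnspython): compare the resolving and
# authoritative ends of a DNS lookup. example.com is a stand-in domain.
import dns.message
import dns.query
import dns.resolver

domain = 'example.com'

# 1) Ask the local resolving server (the cache the end-user normally talks to).
answer = dns.resolver.resolve(domain, 'A')
print('Via local resolver:', [rr.to_text() for rr in answer])

# 2) Find an authoritative server for the domain and ask it directly.
ns_name = dns.resolver.resolve(domain, 'NS')[0].to_text()
ns_ip = dns.resolver.resolve(ns_name, 'A')[0].to_text()
response = dns.query.udp(dns.message.make_query(domain, 'A'), ns_ip, timeout=5)
print('Direct from', ns_name, ':', [rr.to_text() for rr in response.answer[0]])
```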

On the end-user side, I just read an article about how public DNS providers like OpenDNS and Google are breaking the internet. OK, maybe not breaking the internet, but the public DNS providers are confusing CDNs' location-based algorithms. The article is here: http://www.sajalkayan.com/in-a-cdnd-world-opendns-is-the-enemy.html and I strongly recommend that both content providers and content consumers read it.

The summary is that some CDN algorithms use the IP address (and location) of the DNS server making the request, and if that DNS server is nowhere near the end-user on the internet, the end-user will get served content from farther away and will receive that content more slowly than desired. The conclusion is that an end-user should always use a DNS server located as close as possible network-wise; usually that ends up being a DNS server of the network provider.
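
You can watch that effect yourself by asking different resolving servers for the same CDN-fronted hostname and comparing the answers; a distant public resolver will often hand back a distant edge node. A quick sketch with dnspython (the hostname is a made-up example; substitute any CDN-hosted name):

```python
# Sketch: the same hostname can resolve differently depending on which resolving
# DNS server you ask. The hostname is a made-up example; the resolver IPs are
# the public addresses for Google Public DNS and OpenDNS.
import dns.resolver

hostname = 'www.example-cdn-site.com'    # substitute a real CDN-fronted name
resolvers = {
    'ISP / local resolver': None,        # use the system default
    'Google Public DNS': '8.8.8.8',
    'OpenDNS': '208.67.222.222',
}

for label, ip in resolvers.items():
    r = dns.resolver.Resolver()
    if ip:
        r.nameservers = [ip]
    answers = [rr.to_text() for rr in r.resolve(hostname, 'A')]
    print(f'{label:22s} -> {answers}')
```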

That is good advice for the end-user, but what about the content provider? If you flip this around and come at DNS from the point of view of a content provider who doesn't use a CDN, you want to make sure that when a DNS request is made, your authoritative DNS server gets the IP address back to the end-user as quickly and reliably as possible.

SoftLayer has built out authoritative DNS farms in all of our data centers and network POPs and anycasted the IP addresses for the name servers. What that means is that SoftLayer customers – who get to use our DNS for free – can have their authoritative domain services hosted at all 10 points in North America, and through the routing optimization inherent in the internet, the name-to-number conversion for those domains will happen as close as possible to the end-user and the results will be delivered as quickly as possible.

One very important goal of every content provider is to give the end-user the best experience possible. Understanding how the internet works from the end-user side as well as the server side is critical. It doesn't matter how good your content or app is if the end-user has a poor experience.

-@nday91
