Blurring the Line Between Dedicated and Cloud Service

January 20, 2011

What does "the cloud" mean to you right now? Does it mean "the Internet"? Is it how you think of outsourced IT? Does the nephologist in you immediately think of the large cumulonimbus creeping up the sky from the South? We read about how businesses are adopting cloud-this and cloud-that, but under many definitions we have been using cloud servers for years.

A couple of years ago, Kevin wrote a post that gave a little context to the "cloud" terminology confusion:

The Internet is everywhere and the Internet is nowhere.

The fact that we can't point to anything tangible to define the Internet forces us to conceptualize an image that helps us understand how this paradox is possible. A lot of information is sitting around on servers somewhere out there, and when we connect to it, we have access to it all. Cloud, web, dump truck, tubes ... It doesn't matter what we call it because we're not defining the mechanics, we're defining the concepts.

For years, hosting companies have offered compute resources over the Internet for a monthly fee, but as new technologies emerge, it seems we have painted ourselves into a corner with our terminology. For the sake of this discussion, we'll differentiate dedicated servers as single-tenant hardware-dependent servers and cloud servers as multi-tenant hardware-independent servers.

Dedicated servers have some advantages that cloud servers typically haven't offered. If you wanted full OS support and control, predictable CPU and disk performance, big Internet pipes, multiple storage options and more powerful networking support, you were in the market for a dedicated server. If your priorities were hourly rates, instant turn-up, image-based provisioning and control via API, cloud servers were probably at the top of your shopping list.

Some competitive advantages of one over the other are fading: SoftLayer has a bare metal product that supports hourly rates for dedicated resources, and we can reliably turn up dedicated servers in under 2 hours. If you select a ready-made box, you might have it up and running in under 30 minutes. Our development team has also built a great API that allows unparalleled control for our dedicated servers.
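
To give a taste of that control, here's a minimal sketch of listing the dedicated servers on an account through the API's REST interface. Treat it as an illustration rather than an official example: it assumes the public api.softlayer.com REST endpoint and the SoftLayer_Account::getHardware method, and the username and API key are placeholders for your own portal credentials.

    # Minimal sketch: list the dedicated servers on an account via the REST API.
    # Assumes the api.softlayer.com REST endpoint and the
    # SoftLayer_Account::getHardware method; YOUR_USERNAME and YOUR_API_KEY
    # are placeholders for real portal credentials.
    import base64
    import json
    import urllib.request

    SL_USERNAME = "YOUR_USERNAME"  # placeholder
    SL_API_KEY = "YOUR_API_KEY"    # placeholder

    URL = "https://api.softlayer.com/rest/v3/SoftLayer_Account/getHardware.json"

    # The REST interface authenticates with HTTP basic auth (username:apiKey),
    # so build the Authorization header by hand.
    token = base64.b64encode((SL_USERNAME + ":" + SL_API_KEY).encode()).decode()
    request = urllib.request.Request(URL, headers={"Authorization": "Basic " + token})

    with urllib.request.urlopen(request) as response:
        servers = json.load(response)

    # Each record includes basic hardware details; print a quick inventory.
    for server in servers:
        print(server.get("hostname"), server.get("primaryIpAddress"))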

On the flip side, our cloud servers are supported just like our dedicated servers: You get the same great network, the ability to connect with other cloud and dedicated instances via private network, and predictable CPU usage with virtual machines pinned to a specific number of CPU cores.

Soon enough, the deltas between dedicated performance and cloud functionality will be virtually eliminated, and we'll all be able to adopt a unified understanding of what this "cloud" thing is. Until then, we'll do our best to explain the competitive advantages of each platform so you can incorporate the right solutions for your needs into your infrastructure.

Engage ...

-Duke

Comments

January 20th, 2011 at 10:03am

At Web-JIVE, we have been considering moving from our SL dedicated iron to an SL private cloud based infrastructure because of the added protection of being attached to a SAN rather than relying on our RAID1. Since we run cPanel/CentOS, our big question revolves around disk expansion more than anything.

The one question I have yet to get a straight answer on is this: say we purchase a private cloud instance with 250GB of disk storage and, all of a sudden, we need more disk capacity. How do we add, say, another 250GB of disk to the current private cloud instance without having separate /home dirs or having to move to another instance?

Would this be accomplished by having SL set up the instance with CentOS LVM, or does that matter?

Regards,
Eric Caldwell - CEO
Web-JIVE.com

January 20th, 2011 at 2:51pm

Thanks for the question, Eric!

The short answer to the question of upgrading disk capacity of a cloud instance is that it can't be done without moving to another instance ... Once it's locked into place, it's effectively a fixed-maximum hard drive.

As I understand it, an LVM can be used on a secondary "disk" in the cloud to enable additional disk space (up to a total of four secondary disks). I don't think we typically manage this kind of administration, but it's relatively straightforward to set up.
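
If it helps, here's a rough sketch of what that looks like on a CentOS instance once a secondary disk has been attached. The device and volume names (/dev/xvdc, VolGroup00, LogVol00) are examples only, so check yours with lsblk and lvdisplay before running anything, and keep in mind this isn't a procedure we manage for you.

    # Rough sketch: fold a newly attached secondary disk into an existing LVM
    # volume group so the filesystem holding /home can grow in place.
    # Device and volume names below are examples only.
    import subprocess

    NEW_DISK = "/dev/xvdc"                        # the secondary disk
    VOLUME_GROUP = "VolGroup00"                   # existing volume group
    LOGICAL_VOLUME = "/dev/VolGroup00/LogVol00"   # LV mounted at /home

    def run(cmd):
        """Echo a command and run it, stopping on the first failure."""
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    run(["pvcreate", NEW_DISK])                           # 1. label the disk for LVM
    run(["vgextend", VOLUME_GROUP, NEW_DISK])             # 2. add it to the volume group
    run(["lvextend", "-l", "+100%FREE", LOGICAL_VOLUME])  # 3. grow the logical volume
    run(["resize2fs", LOGICAL_VOLUME])                    # 4. grow the ext filesystem to match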

Does that help?

January 31st, 2011 at 10:41am

We continue to use, and advise, a hybrid approach that mixes dedicated and cloud resources until the cloud framework matures.

We have found the cloud to fill certain roles very nicely and will continue to expand our footprint with those services.

Here's some of the bad and good we've experienced so far.

The Bad:

While stability has improved tremendously, some issues continue to hamper a wholesale switch to the cloud:
- Private network maintenance/issues
- SL maintenance
- Lack of Transparency
- Critical Failures

Though it is decreasing, private network maintenance pulls your cloud instances offline. This increases downtime compared to a dedicated server. Also, consider that this work is not under your control but at SL's discretion.

Again, this is decreasing, but we've noticed SL maintenance impacting cloud instances more frequently than dedicated resources.

In more than one case, we have had cloud instances fail to respond to any control methods via the portal. To resolve the situation, we have to open a ticket with SL. The underlying cause is not always shared or known, making it difficult for us to assure clients that issues are not recurring.

We have had two total failures where multiple cloud instances failed due to SAN or network problems. The ability to isolate instances into different groups so they would not be impacted by the same point of failure would be very helpful.

The Good:

Besides the obvious (lower costs, rapid deployment, etc.), here are some other ways we are really leveraging the solution:

- Development systems
- DDoS/Security Mitigation
- Rapid Recovery
- Diagnostics

Instead of maintaining various build boxes internally, we now have cloud templates for various development tasks. When we need them, we spin them up, build our tools, shut them down. Using a template assures a standardized build environment.

We have put up reverse proxy/filtering systems in case of an attack. Very quick and easy, especially when using portable IPs.

As with the DDoS mitigation, if you use portable IPs, you can often bring up a new cloud instance from your templates much faster than SL can fix an outage.

We use them for diagnostic purposes. If a box is having some odd issue, we can easily clone it and diagnose without impacting production operations.

I am hoping that over the year the bad will continue to decline and that we will find many more good uses.

February 4th, 2011 at 1:22pm

The 'delta' that matters in the dedicated vs. cloud discussion for most of us is cost. In terms of SoftLayer's product and add-on offerings, dedicated servers still offer the best value for the dollar.

Even in a perfect world where the reliability and performance were there, cloud servers would still incur the same high costs as a dedicated server for less of a 'machine'. You can tout scaling, but there isn't a cloud system out there that can get more than a bare-bones cloud server up quicker than you can turn up a physical server, or better yet turn up your own VM.

The potential for cloud servers is there, but until price & features/reliability/performance meet, I'm afraid it's more marketing hype than a truly viable solution.
