Posts Tagged 'Technology'

April 20, 2012

Choosing a Cloud: Cost v. Technology v. Hosting Provider

If you had to order a new cloud server right now, how would you choose it?

I've worked in the hosting industry for the better part of a decade, and I can safely say that I've either observed or been a part of the buying decision for a few thousand hosting customers — from small business owners getting a website online for the first time to established platforms that are now getting tens of millions of visits every day. While each of those purchasers had different requirements and priorities, I've noticed a few key deciding factors that are consistent across all of those decisions:

The Hosting Decision

How much will the dedicated server or cloud computing instance cost? What configuration/technology do I need (or want)? Which hosting provider should I trust with my business?

Every website administrator of every site on the Internet has had to answer those three questions, and while they seem pretty straightforward, they end up overlapping, and the buying decision starts to get a little more complicated:

The Hosting Decision

The natural assumption is that everyone will choose a dedicated server or cloud computing instance that falls in the "sweet spot" where the three circles overlap, right? While that makes sense on paper, hosting decisions are not made in a vacuum, so you'll actually see completely valid hosting decisions targeting every spot on that graph.

Why would anyone choose an option that wouldn't fit in the sweet spot?

That's a great question, and it's a tough one to answer in broad strokes. Let's break the chart down into a few distinct zones to look at why a user would choose a server in each area:

The Hosting Decision

Zone 1

Buyers choosing a server in Zone 1 are easiest to understand: Their budget takes priority over everything else. They might want to host with a specific provider or have a certain kind of hardware, but their budget doesn't allow for either. Maybe they don't need their site to use the latest and greatest hardware or have it hosted anywhere in particular. Either way, they choose a cloud solely based on whether it fits their budget. After the initial buying decision, if another server needs to be ordered, they might become a Zone 4 buyer.

Zone 2

Just like Zone 1 buyers, Zone 2 buyers are a pretty simple bunch as well. If you're an IT administrator at a huge enterprise that does all of its hosting in-house, your buying decision is more or less made for you. It doesn't matter how much the solution costs; you have to choose an option in your data center, and while you might like a certain technology, you're going to get what's available. Enterprise users aren't the only people ordering servers in Zone 2, though ... It's also where you see loyal customers who have the ability to move to another provider but prefer not to — whether because they want their next server in the same place as their current servers, because they value the capabilities of a specific hosting provider, or because they just like the witty, interesting blogs that provider writes.

Zone 3

As with Zone 1 and Zone 2, when a zone doesn't have any overlapping areas, the explanation is pretty easy. In Zone 3, the buying decision is being made with a priority on technology. Buyers in this area don't care what it costs or where it's hosted ... They need the fastest, most powerful, most scalable infrastructure on the market. Similar to Zone 1 buyers, once Zone 3 buyers make their initial buying decision, they might shift to Zone 5 for their next server or cloud instance, but we'll get to that in a minute.

Zone 4

Now we're starting to overlap. In Zone 4, a customer will be loyal to a hosting provider as long as that loyalty doesn't take them out of their budget. This is a relatively common customer ... They'll try to compare options apples-to-apples, and they'll make their decision based on which hosting provider they like/trust most. As we mentioned above, if a Zone 1 buyer is adding another server to their initial server order, they'll likely look to add to their environment in one place to make it easier to manage and to get the best performance between the two servers.

Zone 5

Just like the transitional Zone 1 buyers, when Zone 3 buyers look to build on their environment, they'll probably become Zone 5 buyers. When your initial buying decision is based entirely on technology, it's unusual to reinvent the wheel when it comes to your next buying decision. While there are customers that will reevaluate their environment and choose a Zone 3 option irrespective of where their current infrastructure is hosted, it's less common. Zone 5 users love having the latest and greatest technology, and they value being able to manage it through one provider.

Zone 6

A Zone 6 buyer is usually a Zone 1 buyer that has specific technology needs. With all the options on the table, a Zone 6 buyer will choose the cloud environment that provides the latest technology or best performance for their budget, regardless of the hosting provider. As with Zone 1 and Zone 3 buyers, a Zone 6 buyer will probably become a Zone 7 buyer if they need to order another server.

Zone 7

Zone 7 buyers are in the sweet spot. They know the technology they want, they know the price they want to pay, and they know the host they want to use. They're able to value all three of their priorities equally, and they can choose an environment that meets all of their needs. After Zone 6 buyers order their first server(s), they're probably going to become Zone 7 buyers when it comes time to place their next order.

As you probably noticed, a lot of transitioning happens between an initial buying decision and a follow-up buying decision, so let's look at that quickly:

The Hosting Decision

Regardless of how you make your initial buying decision, when it's time for your next server or cloud computing instance, you have a new factor to take into account: You already have a cloud infrastructure at a hosting provider, so when it comes time to grow, you'll probably want to grow in the same place. Why? Moving between providers can be a pain, managing environments between several providers is more difficult, and if your servers have to work together, they're generally doing so across the public Internet, so you're not getting the best performance.

Where does SoftLayer fit in all of this? Well, beyond being a hosting provider that buyers are choosing, we have to understand how buyers make their buying decisions, and we have to position our business to appeal to the right people with the right priorities. It's impossible to be all things for all people, so we have to choose where to invest our attention ... I'll leave that post for another day, though.

If you had to choose a zone that best describes how you made (or are currently making) your buying decision, which one would it be?

-@khazard

April 18, 2012

Dome9: Tech Partner Spotlight

This guest blog comes to us from Dave Meizlik, Dome9 VP of marketing and business development. Dome9 is a featured member of the SoftLayer Technology Partners Marketplace. With Dome9, you get secure, on-demand access to all your servers by automating and centralizing firewall management and making your servers virtually invisible to hackers.

Three Tips to Securing Your Cloud Servers

By now everyone knows that security is the number one concern among cloud adopters. But lesser known is why and what to do to mitigate some of the security risks ... I hope to shed a little light on those points in this blog post, so let's get to it.

One of the greatest threats to cloud servers is unsecured access. Administrators leave ports (like RDP and SSH) open so they can connect to and manage their machines ... After all, they can't just walk down the hall to gain access to them like with an on-premise network. The trouble with this practice is that it leaves these and other service ports open to attack from hackers who need only guess the credentials or exploit a vulnerability in the application or OS. Many admins don't think about this because for years they've had a hardened perimeter around their data center. In the cloud, however, the perimeter collapses down to each individual server, and so too must your security.

Tip #1: Close Service Ports by Default

Instead of leaving ports — from SSH to phpMyAdmin — open and vulnerable to attack, close them by default and open them only when, for whom, and as long as is needed. You can do this manually — just be careful not to lock yourself out of your server — or you can automate the process with Dome9 for free.
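
As a rough sketch of what "closed by default, opened on demand" looks like at the firewall level, the snippet below builds a time-limited "lease" on a port using generic iptables rules. This is not Dome9's actual mechanism, and the port, source address, and lease length are all illustrative:

```python
from datetime import datetime, timedelta, timezone

def lease_commands(port, source_cidr, minutes=60):
    """Build firewall commands for a time-limited 'lease' on a service port.

    Returns (open_cmd, close_cmd, expires_at): run open_cmd now, then
    schedule close_cmd (via `at`, cron, etc.) for expires_at so the port
    doesn't stay open if you forget about it.
    """
    open_cmd = (f"iptables -I INPUT -p tcp --dport {port} "
                f"-s {source_cidr} -j ACCEPT")
    # The close command deletes the exact rule the open command inserted.
    close_cmd = open_cmd.replace("-I INPUT", "-D INPUT", 1)
    expires_at = datetime.now(timezone.utc) + timedelta(minutes=minutes)
    return open_cmd, close_cmd, expires_at

# Open SSH to a single admin IP for 30 minutes:
open_cmd, close_cmd, expires = lease_commands(22, "203.0.113.7/32", minutes=30)
print(open_cmd)
```

The point is that access is scoped three ways at once: to one port, to one source address, and to one window of time.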

Dome9 provides a patent-pending technology called Secure Access Leasing, which enables you to open a port on your server with just one click from within Dome9 Central, our SaaS management console, or as an extension in your browser. With just one click, you get time-based secure access and the ability to empower a third party (e.g., a developer) with access easily and securely.

When your service ports are closed by default, your server is virtually invisible to hackers because the server will not respond to an attacker's port scans or exploits.

Tip #2: Make Your Security as Elastic as Your Cloud

Another key challenge in cloud security is management. In a traditional enterprise, you have a semi-defined perimeter with a firewall and a strong, front-line defense. In the cloud, however, that perimeter collapses down to the individual server, so you have as many perimeters to defend as you have servers, and the number of policies you manage grows with every server you add, adding complexity and cost. Remember: if you can't manage it, you can't secure it.

As you re-architect your infrastructure, take the opportunity to re-architect your security, keeping in mind that you need to be able to scale instantaneously without adding management overhead. To do so, create group-based policies for similar types of services, with role-based controls for users that need access to your cloud servers.

With Dome9, for example, you can create an unlimited number of security groups — umbrella policies applied to one or more servers and for which you can create user-based self-service access. So, for example, you can set one policy for your web servers and another for your SQL database servers, then you can enable your web developers to self-grant access to the web servers while the DBAs have access to the database servers. Neither, however, may be able to access the others' servers, but you — the super admin — can. Any new servers you add on-the-fly as you scale up your infrastructure are automatically paired with your Dome9 account and attached to the relevant security group, so your security is truly elastic.
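
A minimal sketch of that model (group-based policies plus role-based self-service access) might look like the following. The group names, roles, and servers are all illustrative; this shows the shape of the idea, not Dome9's implementation:

```python
# Umbrella policies: each group covers a set of servers, the ports its
# policy opens, and the roles allowed to self-grant access to it.
SECURITY_GROUPS = {
    "web": {"servers": {"web-01", "web-02"}, "ports": {80, 443},
            "self_service_roles": {"web-developer"}},
    "db":  {"servers": {"db-01"}, "ports": {3306},
            "self_service_roles": {"dba"}},
}

def can_grant_access(user_role, server):
    """A super admin can reach everything; other roles can reach only
    servers in groups that list them for self-service access."""
    if user_role == "super-admin":
        return True
    return any(server in g["servers"] and user_role in g["self_service_roles"]
               for g in SECURITY_GROUPS.values())

def register_server(server, group):
    """New servers attach to an existing group as you scale, so the
    policy count stays flat while the server count grows."""
    SECURITY_GROUPS[group]["servers"].add(server)

register_server("web-03", "web")  # a new web head inherits the web policy
print(can_grant_access("web-developer", "web-03"))
```

The elasticity comes from the last function: scaling up adds servers to groups, not new policies to manage.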

Tip #3: Make Security Your Responsibility

The last key security challenge is understanding who's responsible for securing your cloud. It's here that there's a lot of debate and folks get confused. According to a recent Ponemon Institute study, IT pros point fingers equally at the cloud provider and cloud user.

When everyone is responsible, no one is responsible. It's best to pick up the reins and be your own best champion. Great cloud and hosted providers like SoftLayer are going to provide an abundance of controls — some their own, and some from great security providers such as Dome9 (shameless, I know) — but how you use them is up to you.

I liken this to a car: Whoever made your car built it with safety in mind, adding seat belts and air bags and lots of other safeguards to protect you. But if you go speeding down the freeway at 140 MPH without a seatbelt on, you're asking for trouble. When you apply this concept to the cloud, I think it helps us better define where to draw the lines.

At the end of the day, consider all your options and how you can use the tools available to most effectively secure your cloud servers. It's going to be different for just about everyone, since your needs and use cases are all different. But tools like Dome9 let you self-manage your security at the host layer and allow you to apply security controls for how you use a cloud platform (i.e., helping you be a safe driver).

Security is a huge topic, and I didn't even scratch the surface here, but I hope you've learned a few things about how to secure your cloud servers. If the prospect of scaling out security policies across your infrastructure isn't particularly appealing, I invite you to try out Dome9 (for free) to see how easily you can manage automated cloud security on your SoftLayer server. It's quick, easy, and (it's worth repeating a few times...) free:

  1. Create a Dome9 account at https://secure.dome9.com/Account/Register?code=SoftLayer
  2. Add the Dome9 agent to your SoftLayer server
  3. Configure your policy in Dome9 Central, our SaaS management console

SoftLayer customers that sign up for Dome9 enjoy all the capabilities of Dome9 free for 30 days. After that trial period, you can opt to use either our free Lite Cloud, which provides security for an unlimited number of servers, or our Business Cloud for automated cloud security.

-Dave Meizlik, Dome9

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

April 17, 2012

High Performance Computing for Everyone

This guest blog was submitted by Sumit Gupta, senior director of NVIDIA's Tesla High Performance Computing business.

The demand for greater levels of computational performance remains insatiable in the high performance computing (HPC) and technical computing industries, as researchers, geophysicists, biochemists, and financial quants continue to seek out and solve the world's most challenging computational problems.

However, access to high-powered HPC systems has been a constant problem. Researchers must compete for supercomputing time at popular open labs like Oak Ridge National Laboratory in Tennessee. And small and medium-size businesses, even large companies, cannot afford to constantly build out larger computing infrastructures for their engineers.

Imagine the new discoveries that could happen if every researcher had access to an HPC system. Imagine how dramatically the quality and durability of products would improve if every engineer could simulate product designs 20, 50, or 100 times more often.

This is where NVIDIA and SoftLayer come in. Together, we are bringing accessible and affordable HPC computing to a much broader universe of researchers, engineers and software developers from around the world.

GPUs: Accelerating Research

High-performance NVIDIA Tesla GPUs (graphics processing units) are quickly becoming the go-to solution for HPC users because of their ability to accelerate all types of commercial and scientific applications.

From Beijing to Silicon Valley — and just about everywhere in between — GPUs are enabling breakthroughs and discoveries in biology, chemistry, genomics, geophysics, data analytics, finance, and many other fields. They are also driving computationally intensive applications, like data mining and numerical analysis, to much higher levels of performance — as much as 100x faster.

The GPU's "secret sauce" is its unique ability to provide power-efficient HPC performance while working in conjunction with a system's CPU. With this "hybrid architecture" approach, each processor is free to do what it does best: GPUs accelerate the parallel research application work, while CPUs process the sequential work.

The result is an often dramatic increase in application performance.
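
The division of labor can be sketched in miniature. Pure Python stands in for both processors here; on real hardware, the inner kernel would be CUDA code running on a Tesla GPU while the outer loop stays on the CPU:

```python
def parallel_kernel(values):
    """Embarrassingly parallel work: the same operation applied to every
    element with no dependencies between them. This is the part a GPU's
    thousands of cores accelerate."""
    return [v * v for v in values]

def simulate(steps, values):
    """Sequential control flow (the CPU's job) driving parallel work
    (the GPU's job) each iteration."""
    for _ in range(steps):
        values = parallel_kernel(values)
    return values

print(simulate(2, [1, 2, 3]))  # -> [1, 16, 81]
```

Because each step depends on the last, the outer loop can't be parallelized, which is exactly why the hybrid split keeps it on the CPU.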

SoftLayer: Affordable, On-demand HPC for the Masses

Now, we're coupling GPUs with easy, real-time access to computing resources that don't break the bank. SoftLayer has created exactly that with a new GPU-accelerated hosted HPC solution. The service uses the same technology that powers some of the world's fastest HPC systems, including dual-processor Intel E5-2600 (Sandy Bridge) based servers with one or two NVIDIA Tesla M2090 GPUs:

NVIDIA Tesla

SoftLayer also offers an on-demand, consumption-based billing model that allows users to access HPC resources when and how they need to. And, because SoftLayer is managing the systems, users can keep their own IT costs in check.

You can get more system details and pricing information here: SoftLayer HPC Servers

I'm thrilled that we are able to bring the value of hybrid HPC computing to larger numbers of users. And, I can't wait to see the amazing engineering and scientific advances they'll achieve.

-Sumit Gupta, NVIDIA - Tesla

February 28, 2012

14 Questions Every Business Should Ask About Backups

Unfortunately, having "book knowledge" (or in this case "blog knowledge") about backups and applying that knowledge faithfully and regularly are not necessarily one and the same. Regardless of how many times you hear it or read it, if you aren't actively protecting your data, YOU SHOULD BE.

Here are a few questions to help you determine whether your data is endangered:

  1. Is your data backed up?
  2. How often is your data backed up?
  3. How often do you test your backups?
  4. Is your data backed up externally from your server?
  5. Are your backups in another data center?
  6. Are your backups in another city?
  7. Are your backups stored with a different provider?
  8. Do you have local backups?
  9. Are your backups backed up?
  10. How many people in your organization know where your backups are and how to restore them?
  11. What's the greatest amount of data you might lose in the event of a server crash before your next backup?
  12. What is the business impact of that data being lost?
  13. If your server were to crash and the hard drives were unrecoverable, how long would it take you to restore all of your data?
  14. What is the business impact of your data being lost or inaccessible for the length of time you answered in the last question?

We can all agree that the idea of backups and data protection is a great one, but when it comes to investing in that idea, some folks change their tune. While each of the above questions has a "good" answer when it comes to keeping your data safe, your business might not need "good" answers to all of them for your data to be backed up sufficiently. You should understand the value of your data to your business and invest in its protection accordingly.

For example, a million-dollar business running on a single server will probably value its backups more highly than a hobbyist with a blog she contributes to once every year and a half. The million-dollar business needs more "good" answers than the hobbyist, so the business should invest more in the protection of its data than the hobbyist.

If you haven't taken time to quantify the business impact of losing your primary data (questions 11-14), sit down with a pencil and paper and take time to thoughtfully answer those questions for your business. Are any of those answers surprising to you? Do they make you want to reevaluate your approach to backups or your investment in protecting your data?
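
A back-of-the-envelope way to put numbers on questions 11-14 is to multiply your backup interval and restore window by what an hour of data and an hour of downtime are worth to you. The rates below are placeholders for your own figures, not a formal RPO/RTO model:

```python
def backup_exposure(backup_interval_hours, restore_hours,
                    revenue_per_hour, data_value_per_hour):
    """Rough worst-case exposure from a crash just before the next backup:
    you lose up to one interval of data (questions 11-12) and eat the
    full restore window of downtime (questions 13-14)."""
    lost_data_cost = backup_interval_hours * data_value_per_hour
    downtime_cost = restore_hours * revenue_per_hour
    return lost_data_cost + downtime_cost

# Daily backups, an 8-hour restore, $500/hr revenue, $200/hr of data value:
print(backup_exposure(24, 8, 500, 200))  # -> 8800
```

If that number surprises you, it's a good signal that your backup investment is out of step with what your data is worth.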

The funny thing about backups is that you don't need them until you NEED them, and when you NEED them, you'll usually want to kick yourself if you don't have them.

Don't end up kicking yourself.

-@khazard

P.S. SoftLayer has a ton of amazing backup solutions, but in the interest of making this post accessible and sharable, I won't go crazy linking to them throughout the post. The latest product release that got me thinking about this topic was the SoftLayer Object Storage launch, and if you're concerned about your answers to any of the above questions, object storage may be an economical way to easily get some more "good" answers.

February 24, 2012

Kontagent: Tech Partner Spotlight

This is a guest blog featuring Kontagent, one of this month's additions to the SoftLayer Technology Partners Marketplace. Kontagent's kSuite Analytics Platform is a leading enterprise analytics solution for social and mobile application developers. Its powerful dashboard and data science expertise provide organization-wide insights into how customers interact within applications and how to act on that data. Below, you'll see an excerpt from a very interesting interview they facilitated with Gaia Online's CEO, with fantastic insight into mobile app metrics.

Important Mobile App Metrics to Track

At Kontagent, we've helped hundreds of social customers win by helping them gain better insights into their users' behaviors. We're always improving our already-powerful, best-in-class analytics platform, and we've been leveraging our knowledge and experience to help many of our social customers make a successful transition into the mobile space, too.

Whether you're in the early stages of developing a mobile application or you've already launched it and have a substantial user base, looking to social app developers for a history lesson on how to do it right can give you a huge head start and a greater chance at success.

Gaia Online has "done it right" with Monster Galaxy — a hit on both Facebook and iOS. In the first installment of our Kontagent Konnect Executive Interview Series, we spoke with CEO Mike Sego on how the company is applying many of the lessons it learned in moving social-to-mobile, including:

  • The metrics that are most important to succeeding on mobile
  • How to monetize on the F2P model
  • How to successfully split-test on iOS (yes, it is possible!)
  • Other tactics used to keep players engaged and coming back for more

Q: What are the overarching fundamentals for developers who want to make the social to mobile transition? Do these fundamentals also apply to mobile developers in general?
A: Applying the knowledge you gained on Facebook to developing for mobile is the most effective way we've found to succeed in the mobile space.

When it comes to content, the mechanics of what motivates user engagement, retention, and monetization are almost identical between mobile and social. Appointment mechanics, energy mechanics, leaving players wanting more, designing specific goals that stay just out of reach until multiple play sessions, etc. — the user experience is consistent.

When it comes to social and mobile game apps, we have found that free-to-play models are the most successful at attracting users. Beyond that, you should focus on a very tight conversion funnel; once a new user has installed your application, analyze every action she takes through the levels or stages of your app. When you start looking at cohorts of users, if there is a spike in drop-offs, you should start asking yourself, 'What is it about this particular stage that could be turning off users? Did I make the level too difficult? Was it not difficult enough? What are some other incentives I can bake into this particular point of the app to get them to keep going?'

But, as you continue to develop your application, keep in mind that you should develop and release quickly, and test often. The trick is to test, fine-tune and iterate with user data. These insights will help you to improve conversion. Spending a disproportionate amount of time instrumenting and scrutinizing the new user experience will pay dividends down the line. This is true for both social and mobile games.
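
The funnel analysis described above amounts to comparing cohort counts stage by stage and looking for spikes in drop-off. A minimal sketch, with invented numbers:

```python
def funnel_dropoff(stage_counts):
    """Given cohort counts at each funnel stage (install -> level 1 ->
    level 2 -> ...), return the share of users lost at each transition
    so drop-off spikes stand out."""
    drops = []
    for before, after in zip(stage_counts, stage_counts[1:]):
        drops.append(round(1 - after / before, 2))
    return drops

# 1000 installs, 900 reach level 1, 850 reach level 2, 300 reach level 3:
print(funnel_dropoff([1000, 900, 850, 300]))  # -> [0.1, 0.06, 0.65]
```

A 65% drop at the third transition is the kind of spike that prompts the questions in the interview: too difficult, not difficult enough, or missing an incentive?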

Q: What are the metrics you pay most attention to?
A: Just as it was in social, the two biggest levers in mobile are still minimizing customer acquisition costs (CAC) and maximizing lifetime value (LTV). The question boils down to this: How can we acquire as many users as possible, for as little money as possible? And, how can we generate as much revenue as possible from those users? Everything else is an input into those two major metrics because those two metrics are what will ultimately determine if you have a scalable hit or a game that just won't pay for itself.

User retention over a longer period of time
Specifically, look at how many users stick around, and how long they stick around, i.e., Day 1, Day 7 retention. (Day 1 retention alone is too broad for you to fully understand what needs to be improved. That's the reason for testing the new user experience.)

Cost to acquire customers
We look at the organic ratio—the number of users who come to us without us having paid for them. This is different from the way we track virality in social since our data for user source isn't as detailed… continued
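
To make the two levers from the interview concrete, here's a toy calculation of cohort retention and the LTV-to-CAC ratio. The numbers are invented for illustration:

```python
def day_n_retention(install_day_users, day_n_active):
    """Share of a cohort still active N days after install
    (e.g. Day 1 or Day 7 retention)."""
    return day_n_active / install_day_users

def ltv_to_cac(avg_revenue_per_user, cost_per_install):
    """A game roughly 'pays for itself' when lifetime value per user
    exceeds what it cost to acquire that user (ratio > 1)."""
    return avg_revenue_per_user / cost_per_install

print(day_n_retention(1000, 380))  # Day 7 retention: 0.38
print(ltv_to_cac(3.00, 1.50))      # 2.0: each $1 of spend returns $2
```

Everything else in the dashboard (drop-off rates, organic ratio, session length) feeds into one of these two numbers.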

The full interview goes on a bit longer, and it has profound responses to the topics we alluded to earlier in the post. We don't want to overstay our generous welcome here on the SoftLayer blog, so if social and mobile application development are of interest to you, register here (for free) to learn more from the complete interview.

-Catherine Mylinh, Kontagent

February 16, 2012

Cloudant: Tech Partner Spotlight

This is a guest blog from our featured Technology Partners Marketplace company, Cloudant. Cloudant enables you to build next-generation data-driven applications without having to worry about developing, managing, and scaling your data layer.

Company Website: https://cloudant.com/
Tech Partners Marketplace: http://www.softlayer.com/marketplace/cloudant

Cloudant: Data Layer for the Big Data Era

The recipe for big data app success: Start small. Iterate fast. Grow to epic proportions.

Unfortunately, most developers' databases come up short when they try to simultaneously "iterate fast" and "grow to epic proportions" — those two steps are most often at odds. I know ... I've been there. In a recent past life, I attacked petabyte-per-second data problems as a particle physicist at the Large Hadron Collider together with my colleagues and Cloudant co-founders, Alan Hoffman and Adam Kocoloski. Here are some lessons we learned the hard way:

  1. Scaling a database yourself is brutally hard (both application-level sharding and the master-slave model). It is harder with SQL than with NoSQL databases, but either way, the "scale it yourself" approach is loaded with unknowns, complications, and operational expense.
  2. Horizontal scaling on commodity hardware is a must. We got very good at this and ended up embedding Apache CouchDB behind a horizontal scaling framework to scale arbitrarily and stay running 24x7 with a minimal operational load.
  3. The data layer must scale. It should be something that applications grow into, not out of.
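
To see why point 1 bites, consider the simplest naive application-level sharding scheme: hash each document ID to pick a shard. This sketch is generic, not Cloudant's routing:

```python
import hashlib

def shard_for(doc_id, num_shards):
    """Naive application-level sharding: hash the document ID and take
    it modulo the shard count."""
    digest = hashlib.md5(doc_id.encode()).hexdigest()
    return int(digest, 16) % num_shards

# The pain shows up the moment you grow from 4 shards to 5: most keys
# suddenly map to a different shard, forcing a rebalance you have to
# script, run, and babysit yourself.
moved = sum(
    shard_for(f"doc-{i}", 4) != shard_for(f"doc-{i}", 5)
    for i in range(1000)
)
print(moved)  # roughly 800 of 1000 documents change shards
```

Real systems mitigate this with consistent hashing or pre-split partitions, but either way it's operational machinery you'd rather not own yourself.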

That last point inspired Alan, Adam and me to co-found Cloudant.

What is Cloudant?
Cloudant is a scalable data layer (as a service) for Big Data apps. Built on CouchDB, JSON, and MapReduce, it lets developers focus on new features instead of the drudgery of growing or migrating databases. The Cloudant Data Layer is already big: It collects, stores, analyzes and distributes application data across a global network of secure, high-performance data centers, delivering low-latency and non-stop data access to users no matter where they're located. You get to focus on your code; we've got data scalability and availability covered for you.

Scaling Your App on Cloudant
Cloudant is designed to support fast app iteration by developers. It's based on the CouchDB NoSQL database where data is encapsulated and transferred as JSON documents. You don't need to design and redesign SQL data models or migrate databases in order to create new app features. You don't need to write object-relational mapping code either. The database resides behind an HTTP layer and provides a rich permission model, so you can access, secure and share your data via a RESTful API.
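
In practice, working with a document looks like plain HTTP against a URL. The sketch below only constructs the requests (nothing is sent), and the host, database, and document names are placeholders:

```python
import json

# CouchDB-style document model: schema-free JSON documents addressed by
# URL and manipulated with ordinary HTTP verbs.
BASE = "https://example.cloudant.com/mydb"

def put_doc(doc_id, doc):
    """Create or update a document: PUT /<db>/<doc_id> with a JSON body."""
    return ("PUT", f"{BASE}/{doc_id}", json.dumps(doc))

def get_doc(doc_id):
    """Fetch a document: GET /<db>/<doc_id>."""
    return ("GET", f"{BASE}/{doc_id}", None)

# Need a new field in your data model? Just write a richer document --
# no ALTER TABLE, no migration, no ORM layer:
method, url, body = put_doc("user:42", {"name": "Ada", "plan": "free"})
print(method, url)
```

That schema-free round trip is what makes the "iterate fast" half of the recipe possible.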

Your app is a tenant within a multi-tenant data layer that is already big and scalable. You get a URL end point for your data layer, get data in and out of it via HTTP, and we scale and secure it around the globe. Global data distribution and intelligent routing minimize the latency between your users and their data, which can otherwise add hundreds of milliseconds per request (we've measured!). Additionally, Cloudant has an advanced system for prioritizing requests so that apps aren't affected by 'noisy neighbors' in a multi-tenant system. We also offer a single-tenant data layer to companies who want it — your very own white-labeled data cloud. As your data volume and IO requests rise (or fall), Cloudant scales automatically, and because your data is replicated to multiple locations, it's always available. Start small and grow to epic proportions? Check.

Other Data Management Gymnastics
The Cloudant Data Layer also makes it easy to add advanced functionality to your apps:

  • Replicate data (all of it or sub-sets) to data centers, computers or even mobile devices for local processing (great for analytics) or off-line access (great for mobile users). Re-synching is automatic.
  • Perform advanced analytics with built-in MapReduce and full-text indexing and search.
  • Distribute your code with data — Cloudant can distribute and serve any kind of document, even HTML5 and other browser-based code, which makes it easy to scale your app and move processing from your back-end to the browser.
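
As a toy illustration of the built-in MapReduce analytics, here's the shape of a map/reduce view computed in-process over JSON-style documents. Real Cloudant views are defined in JavaScript and computed incrementally on the server; this only shows the idea:

```python
docs = [
    {"type": "order", "country": "US", "total": 20},
    {"type": "order", "country": "DE", "total": 15},
    {"type": "order", "country": "US", "total": 7},
]

def map_fn(doc):
    """Emit (key, value) pairs; here, order totals keyed by country."""
    if doc["type"] == "order":
        yield doc["country"], doc["total"]

def reduce_fn(values):
    """Combine all values emitted under one key."""
    return sum(values)

view = {}
for doc in docs:
    for key, value in map_fn(doc):
        view.setdefault(key, []).append(value)
result = {key: reduce_fn(vals) for key, vals in view.items()}
print(result)  # -> {'US': 27, 'DE': 15}
```

Because the map step looks at one document at a time, the work parallelizes naturally across shards, which is what makes these views cheap to keep up to date at scale.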

Why We Run on SoftLayer
Given the nature of our service, people always ask us where we have our infrastructure, and we're quick to tell them we chose SoftLayer because we're fanatical about performance. We measured latencies for different data centers run by other cloud providers, and it's no contest: SoftLayer provides the lowest and most predictable latencies. Data centers that are thousands of miles apart perform almost as if they are on the same local area network. SoftLayer's rapidly expanding global presence allows Cloudant to replicate data globally throughout North America, Europe and Asia (with plans to continue that expansion as quickly as SoftLayer can build new facilities).

The other major draw to SoftLayer was the transparency they provide about our infrastructure. If you run a data layer, IO matters! SoftLayer provisions dedicated hardware for us (rather than just virtual machines), and they actually tell us exactly what hardware we are running on, so we can tweak our systems to get the most bang for our buck.

Get Started with Cloudant for Free
If you're interested in seeing what the Cloudant Data Layer could do for your app, sign up at cloudant.com to get your FREE global data presence created in an instant.

-Michael Miller, Cloudant

February 1, 2012

Flex Images: Blur the Line Between Cloud and Dedicated

Our customers are not concerned with technology for technology's sake. Information technology should serve a purpose; it should function as an integral means to a desired end. Understandably, our customers are focused, first and foremost, on their application architecture and infrastructure. They want, and need, the freedom and flexibility to design their applications to their specifications.

Many companies leverage the cloud to take advantage of core features that enable robust, agile architectures. Elasticity (ability to quickly increase or decrease compute capacity) and flexibility (choice such as cores, memory and storage) combine to provide solutions that scale to meet the demands of modern applications.

Another widely used feature of cloud computing is image-based provisioning. Rapid provisioning of cloud resources is accomplished, in part, through the use of images. Imaging capability extends beyond the use of base images, allowing users to create customized images that preserve their software installs and configurations. The images persist in an image library, allowing users to launch new cloud instances based on their images.

But why should images only be applicable to virtualized cloud resources?

With that question in mind, we're excited to introduce SoftLayer Flex Images, a new capability that allows us to capture images of physical and virtual servers, store them all in one library, and rapidly deploy those images on either platform.

SoftLayer Flex Images

Physical servers now share the core features of virtual servers—elasticity and flexibility. With Flex Images, you can move seamlessly between physical and virtual environments as your needs change.

Let's say you're running into resource limits in a cloud server environment—your data-intensive server is I/O bound—and you want to move the instance to a more powerful dedicated server. Using Flex Images, you can create an image of your cloud server and, extending our I/O bound example, deploy it to a custom dedicated server with SSD drives.

Conversely, a dedicated environment can be quickly replicated on multiple cloud instances if you want the scaling capability of the cloud to meet increased demand. Maybe your web heads run on dedicated servers, but you're starting to see periods of usage that stress your servers. Create a Flex Image from your dedicated server and use it to deploy cloud instances to meet demand.

Flex Image technology blurs the distinctions—and breaks down the walls—between virtual and physical computing environments.

We don't think of Flex Images as a new product. Instead—like our network, our portal, our automated platform, and our globe-spanning geographic diversity—Flex Image capability is a free resource for our customers (apart from the standard nominal cost of storing the images).

We think Flex Images not only represents great value, but also provides a further example of how SoftLayer innovates continually to bring new capabilities and the highest possible level of customer control to our automated services platform.

To sum up, here are some of the key features and benefits of SoftLayer Flex Images:

  • Universal images that can be used interchangeably on dedicated or cloud systems
  • Unified image library for archiving, managing, sharing, and publishing images
  • Greater flexibility and higher scalability
  • Rapid provisioning of new dedicated and cloud environments
  • Available via SoftLayer's management portal and API

Flex Images are available now in public beta. We invite you to try them out, and, as always, we want to hear what you think.

-Marc

January 19, 2012

IPv6 Milestone: "World IPv6 Launch Day"

On Tuesday, the Internet Society announced "World IPv6 Launch Day", a huge step in the transition from IPv4 to IPv6. Scheduled for June 6, 2012, this "launch day" comes almost one year after the similarly noteworthy World IPv6 Day, during which many prominent Internet businesses enabled IPv6 AAAA record resolution for their primary websites for a 24-hour period.

With IPv6 Day serving as a "test run," we confirmed a lot of what we know about IPv6 compatibility and interoperability with deployed systems throughout the Internet, and we even learned about a few areas that needed a little additional attention. Access troubles for end users were measured in fractions of a percent, and while some sites left IPv6 running, many of them ended up disabling their AAAA IPv6 records at the end of the event, resuming their legacy IPv4-only configurations.

We're past the "testing" phase now. Many of the IPv6-related issues observed in desktop operating systems (think: your PCs, phones and tablets) and consumer network equipment (think: your home router) have been resolved. In response – and in an effort to kick IPv6 deployment in the butt – the same businesses that ran the 24-hour field test last year have committed to turning on IPv6 for their content and keeping it on as of 6/6/2012.

But that's not all, folks!

In the past, IPv6 availability would have simply impacted customers connecting to the Internet from a few universities, international providers and smaller technology-forward ISPs. What's great about this event is that a significant number of major broadband ISPs (think: your home and business Internet connection) have committed to enabling IPv6 for their subscribers. June 6, 2012, marks the day when at least 1% of the participating ISPs' downstream customers will receive IPv6 addresses.

While 1% may not seem all that impressive at first, in order to survive the change, these ISPs must slowly roll out IPv6 availability to ensure that they can handle the potential volume of resulting customer support issues. There will be new training and technical challenges that I suspect all of these ISPs will face, and this type of approach is a good way to ensure success. Again, we must appreciate that the ISPs are turning it on for good now.

What does this mean for SoftLayer customers? Well, the good news is that our network is already IPv6-enabled ... In fact, it has been for a few years now. Those of you who have taken advantage of running a dual stack of IPv4 and IPv6 addresses may have noticed surprisingly low IPv6 traffic volume. When 6/6/2012 comes around, you should see that volume rise (and continue to rise consistently from there). For those of you without IPv6 addresses, now's the time to get started and get your feet wet. You need to be prepared for the day when new "eyeballs" come online with IPv6-only addresses. If you don't know where to start, go back through this article and click on a few of the hyperlinks, and if you want more information, ARIN has a great informational IPv6 wiki that has been enjoying community input for a couple of years now.

The long term benefit of this June 6th milestone is that with some of the "big guys" playing in this space, the visibility of IPv6 should improve. This will help motivate the "little guys" who otherwise couldn't get motivated – or more often couldn't justify the budgetary requirements – to start implementing IPv6 throughout their organizations. The Internet is growing rapidly, and as our collective attentions are focused on how current legislation (SOPA/PIPA) could impede that growth, we should be intentional about fortifying the Internet's underlying architecture.

-Dani

January 18, 2012

Keep Fighting: SOPA on the Ropes. PIPA Lurking.

The Internet is unnervingly quiet today. In response to the Stop Online Piracy Act (SOPA) in the House of Representatives and the Protect IP Act (PIPA) in the Senate, some of the most popular sites on the web have gone dark today – demonstrating the danger (and the potential unchecked power) of these two bills.

Late Friday afternoon, Judiciary Committee Chairman Lamar Smith announced that the DNS-blocking provisions would be removed from SOPA, and on Saturday, the White House responded in opposition to the bills as they stand today. Shortly thereafter, SOPA was "shelved."

The Internet was abuzz ... but the Champagne wasn't getting popped yet. After digging into the details, it was revealed that SOPA being "shelved" just meant that it was being temporarily put to sleep. Judiciary Committee Chairman Lamar Smith explained:

"To enact legislation that protects consumers, businesses and jobs from foreign thieves who steal America's intellectual property, we will continue to bring together industry representatives and Members to find ways to combat online piracy.

Due to the Republican and Democratic retreats taking place over the next two weeks, markup of the Stop Online Piracy Act is expected to resume in February."

I only mention this because it's important not to forget that SOPA isn't dead, and it's still very dangerous. If you visit sites like reddit, Wikipedia, Mozilla and Boing Boing today (January 18, 2012), you experience the potential impact of the legislation.

The Internet's outrage against SOPA has brought about real change in our nation's capital: The House is reconsidering the bill and will hopefully dismiss it. With our collective momentum, we need to look at the PROTECT IP Act (PIPA, or Senate Bill 968) – a similar bill with similarly harmful implications that's been sneaking around in SOPA's shadow.

As it is defined today, PIPA has a stated goal of providing the US Government and copyright holders an additional arsenal of tools to aid in taking down 'rogue websites dedicated to infringing or counterfeit goods.' The Senate bill details that an "information location tool shall take technically feasible and reasonable measures, as expeditiously as possible, to remove or disable access to the Internet site associated with the domain name set forth in the order." In addition, it must delete all hyperlinks to the offending "Internet site."

Our opposition to PIPA is nearly identical to our opposition to SOPA. Both essentially require breaking a core aspect of how the Internet functions – whether that breakage happens in DNS (as detailed in my last blog post) or in the required rearchitecture of how any site that accepts user-generated content must respond to PIPA-related complaints.

PIPA is scheduled for Senate vote on January 24, 2012. It is important that you voice your opinion with your government representatives and let them know about your opposition to both SOPA and PIPA. We want to help you get started down that path. Find your local representatives' contact information:

[SOPA Concerns]: Contact your congressperson in the U.S. House of Representatives
[PIPA Concerns]: Contact your Senator in the U.S. Senate

Keep spreading the word, and make sure your voice is heard.

-@toddmitchell

January 12, 2012

How the Internet Works (And How SOPA Would Break It)

Last week, I explained SoftLayer's stance against SOPA and mentioned that SOPA would essentially require service providers like SoftLayer to "break the Internet" in response to reports of "infringing sites." The technical readers in our audience probably acknowledged the point and moved on, but our non-technical readers (and some representatives in Congress) might have gotten a little confused by the references to DNS, domains and IP addresses.

Given how pervasive the Internet is in our daily lives, you shouldn't need to be "a techie" to understand the basics of what makes the Internet work ... And given the significance of the SOPA legislation, you should understand where the bill would "break" the process. Let's take a high level look at how the Internet works, and from there, we can contrast how it would work if SOPA were to pass.

The Internet: How Sites Are Delivered

  1. You access a device connected in some way to the Internet. This device can be a cell phone, a computer or even a refrigerator. You are connected to the Internet through an Internet Service Provider (ISP) which recognizes that you will be accessing various sites and services hosted remotely. Your ISP manages a network connected to the other networks around the globe ("inter" "network" ... "Internet").
  2. You enter a domain name or click a URL (for this example, we'll use http://www.softlayer.com since we're biased to that site).

Internet Basics

  3. Your ISP will see that you want to access "www.softlayer.com" and will immediately try to find someone/something that knows what "www.softlayer.com" means ... This search is known as an NS (name server) lookup. In this case, it will find that "www.softlayer.com" is associated with several name servers.

Internet Basics

  4. The first of these name servers to respond with additional information about "softlayer.com" will be used. Domains are typically required to be associated with two or three name servers to ensure that if one is unreachable, requests for that domain name can be processed by another.
  5. The name server has Domain Name System (DNS) information that maps "www.softlayer.com" to an Internet Protocol (IP) address. When a domain name is purchased and provisioned, the owner associates that domain name with an authoritative DNS name server, and a DNS record is created on that name server linking the domain to a specific IP address. Think of DNS as a phone book that translates a name into a phone number for you.

Internet Basics

  6. When the IP address you reach sees that you requested "www.softlayer.com," it will find the files/content associated with that request. Multiple domains can be hosted on the same IP address, just as multiple people can live at the same street address and answer the phone. Each IP address only exists in a single place at a given time. (There are some complex network tricks that can negate that statement, but in the interest of simplicity, we'll ignore them.)
  7. When the requested content is located (and generated by other servers if necessary), it is returned to your browser. Depending on what content you are accessing, the response from the server can be very simple or very complex. In some cases, the request will return a single HTML document. In other cases, the content you access may require additional information from other servers (database servers, storage servers, etc.) before the request can be completely fulfilled. In this case, we get HTML code in return.

Internet Basics

  8. Your browser takes that code and translates the formatting and content to be displayed on your screen. Often, formatting and styling of pages will be generated from a Cascading Style Sheet (CSS) referenced in the HTML code. The purpose of the style sheet is to streamline a given page's code and consolidate the formatting to be used and referenced by multiple pages of a given website.

Internet Basics

  9. The HTML code will reference sources for media that may be hosted on other servers, so the browser will perform the necessary additional requests to get all of the media the website is trying to show. In this case, the most noticeable image that will get pulled is the SoftLayer logo from this location: http://static2.softlayer.com/images/layout/logo.jpg

Internet Basics

  10. When the HTML is rendered and the media is loaded, your browser will probably note that it is "Done," and you will have successfully navigated to SoftLayer's homepage.

If SOPA were to pass, the process would look like this:

The Internet: Post-SOPA

  1. You access a device connected in some way to the Internet.
  2. You enter a domain name or click a URL (for this example, we'll use http://www.softlayer.com since we're biased to that site).

*The Change*

  3. Before your ISP runs an NS lookup, it would have to determine whether the site you're trying to access has been reported as an "infringing site." If http://www.softlayer.com was reported (either legitimately or illegitimately) as an infringing site, your ISP would not process your request, and you'd land on an error page. If your ISP can't find any reference to the domain as an infringing site, it would start looking for the name server to deliver the IP address.
  4. SOPA would also enforce filtering by all authoritative DNS providers. If an ISP sends a request for an infringing site to the name server for that site, the provider of that name server would be forced to prevent the IP address from being returned.
  5. One additional method of screening domains would happen at the level of the operator of the domain's gTLD. gTLDs (generic top-level domains) are the ".____" at the end of the domain (.com, .net, .biz, etc.). Each gTLD is managed by a large registry organization, and a gTLD's operator would be required to prevent an infringing site's domain from functioning properly.
  6. If the gTLD registry operator, your ISP and the domain's authoritative name server provider all agree that the site you're accessing has not been reported as an infringing site, the request would proceed through the pre-SOPA process.

*Back to the Pre-SOPA Process*

  7. The domain's name server responds.
  8. The domain's IP address is returned.
  9. The IP address is reached to get the content for http://www.softlayer.com.
  10. HTML is returned.
  11. Your browser translates the HTML into a visual format.
  12. External file references from the HTML are returned.
  13. The site is loaded.

The proponents of SOPA are basically saying, "It's difficult for us to keep up with and shut down all of the instances of counterfeiting and copyright infringement online, but it would be much easier to target the larger sites/providers 'enabling' users to access that (possible) infringement." Right now, the DMCA process requires a formal copyright complaint to be filed for every instance of infringement, and the providers who are hosting the content on their network are responsible for having that content removed. That's what our abuse team does full-time. It's a relatively complex process, but it's a process that guarantees us the ability to investigate claims for legitimacy and to hear from our customers (who hear from their customers) in response to the claims.

SOPA does not allow for due process to investigate concerns. If a site is reported to be an infringing site, service providers have to do everything in their power to prevent users from getting there.

-@toddmitchell
