sales

January 15, 2016

Vuukle: Helping Publishers Manage Comments and Match Readers with Content

I recently had a conversation with Ravi Mittal, the founder of a company called Vuukle. Vuukle is based in New Delhi and has just graduated from our Catalyst startup program.

Vuukle actually started out in Silicon Valley—Ravi launched his first product iteration with the goal of sourcing public opinion on the Web. Key to his initial offering was a proprietary algorithm he developed to sort comments in order of credibility—a highly valuable feature, but, as he quickly learned, not enough on its own to sustain a product.

Through experiments with Vuukle’s early customers (including the Santa Clara Weekly), a major problem emerged that appeared to pervade the online publishing industry: readers weren’t engaged enough to post (and reply to) comments. To solve this underlying problem, Vuukle pivoted into a new type of comment publishing system, which helps publishers track engagement through custom analytics.

The problem Vuukle tackles is not unique to the publishers it serves. It’s a large-scale global problem, extending beyond news publishers into all content-based publishing online—so you can imagine how much competition is out there around the globe in this space. When I asked Ravi how he differentiates Vuukle from dominant players like Livefyre and Disqus, he offered, "Most customers aren’t using those other services; they have their own commenting systems. If anything, we were pitted against Facebook commenting. In the few cases where Disqus is being used, we’ve seen problems with load times, throttling limits and so on."

To set Vuukle in a class of its own, Ravi and his team—which is globally dispersed, with people in Egypt, Ukraine, the U.S.A., and India—have architected an infrastructure for super-fast load times at amazing scale, employing SoftLayer servers in our Singapore and India data centers and working with a third party, ScaleDB, to handle database queries and traffic. Of course, that alone doesn’t make a unique value proposition; Vuukle truly sets itself apart by dropping upfront publisher costs to a minimal platform access fee and offering a 50/50 revenue share model. Not only is Vuukle set up to handle commenting on high-traffic websites, but it also promotes user engagement with comments by integrating with publishers’ own publishing systems. Vuukle passes traffic between posts and offers editors insights into how readers are commenting, in addition to creating a new revenue stream through comments—from which it sources the majority of its own income.

Interestingly, Ravi’s move from the Valley to India came about for family reasons and ended up being a blessing for the business. Soon after his move, he realized there was a ton of opportunity for Vuukle with the major Indian newspapers that had cobbled together their own infrastructure to power their websites. Just a couple of years in, Vuukle is powering comments on The Hindu, Deccan Chronicle, and Indian Express, three of the most highly trafficked news websites in the country. To help global adoption among all sorts of publishers, Vuukle also offers a free WordPress plugin.

Vuukle seems to have gained traction through Ravi’s hard work chasing customers at home, and he’s proud to be finding success despite being bootstrapped. When questioned about the local startup scene, Ravi said, “Nothing much is unique in the Indian startup ecosystem. [It's] kind of like a gold rush in India, where founders are hunting for investment before they have a clear market path and products that are market-ready. A lot of copycat businesses [are] launching that are focused on Indian markets (taking models from the States and elsewhere.) Not many patents are being filed in India—not much actual innovation, indicative of a proliferation of large seed round raises (around $1 million) and a lot of startups spend funding on staff they don’t need.”

The future seems bright for Vuukle. Its growth beyond India’s borders will happen soon and will be financed through revenue rather than venture capital rounds, of which Ravi seems quite wary. Now that Vuukle has graduated from Catalyst, I was keen to hear whether the company would still keep the majority of their infrastructure with IBM—it turns out prospective Vuukle customers love hearing that their infrastructure is hosted on our cloud and that a core aspect of Vuukle’s value proposition is the scale and reliability we offer their solution.

I really think this company is an exciting one to watch. I look forward to seeing greater success for Vuukle as they grow with our ever-expanding footprint of data centers in the Asian region and globally.

-Qasim

Based in Toronto, Qasim Virjee manages the Catalyst Startup Program in Canada and can be reached on Twitter (@qasim) or via his personal website.

January 12, 2016

The SLayer Standard Vol. 2, No. 1

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

AT&T’s data comes to IBM.
IBM and AT&T announced an expansion of their current partnership. According to the press release, “AT&T will transition its managed application and managed hosting services unit to IBM. IBM will then align these managed service capabilities with the IBM Cloud portfolio.” Philip Guido, IBM General Manager of Global Technology Services for North America, said, "Working with AT&T, we will deliver a robust set of IBM Cloud and managed services that can continuously evolve to meet clients' business objectives."

When the deal closes, the managed application and managed hosting services AT&T offers will be delivered by IBM. “AT&T will continue to provide networking services including security, cloud networking, and mobility that it provides today. And the two companies will work closely to innovate and deliver a full suite of integrated solutions to customers.”

Read the rest of the details in the official press release.

Welcome to Munich, Watson IoT.
The Watson IoT business unit is getting a new home. Last week, IBM announced the “launch of a new global headquarters” in Munich. The new home base “will be the centerpiece of a group of eight global regional customer centers that suggest IBM plans to win major IoT business by deemphasizing its American roots.” Building trust with European companies is a vital goal of the new office. Frank Gillett, a Forrester analyst, said, “A traditional mainline tech company has plunked down in Europe to say, we are firmly with you, we are rooting ourselves in your environment to work with you.”

Gillett also said that IBM’s announcement “signaled the most strongly of any of the vendors when it comes to investment and organizational structure and headquarters. Now they have to execute and deliver.”

Get more information about the new office here.

Watson is the rise of the thinking machine.
IBM Watson VP, Steve Gold, sat down with Forbes to talk about where Watson is headed in 2016.

With the announcement of several new partnerships, IBM plans to put Watson’s cognitive capabilities to use solving a wide array of issues worldwide. Gold said, “At the start of 2014 we had three partners, and today we have over 300.” The article notes, “Watson is already in operation across 26 industries, including financial services, travel and retail in 36 countries, and its uptake is continuing to accelerate.”

The partnerships with Twitter, SoftBank, and Mubadala, to name just a few, will further Watson’s cognitive growth. That’s because “cognitive computers don’t need to be programmed—they can learn for themselves.”

Get the full article here.

-Rachel

January 8, 2016

A guide to Direct Link connectivity

So you’ve got your infrastructure running on SoftLayer, but you find yourself wishing for a more direct way to connect your on-premises or colocated infrastructure to your SoftLayer cloud infrastructure—with higher bandwidth and lower latency. Maybe you’ve also decided that VPN tunnels and private networking connectivity over the public Internet just aren’t good enough. Does that sound like you?

What are my options?

SoftLayer offers three Direct Link products specifically for customers looking for the most efficient connection to their SoftLayer private network. A Direct Link connects you to the SoftLayer private network backbone with low latency and bandwidth up to 10Gbps, using fiber cross-connect patches directly into the SoftLayer private network. A Direct Link connects to a SoftLayer private network within the same geographical location as the physical cross-connect. (An add-on is available that enables you to connect to any of your SoftLayer private networks on a global scale.)

Direct Link Network Service Provider


The Direct Link NSP option allows you to create a cross-connect using single-mode fiber from one of our PoP locations onto the SoftLayer private backbone. A Network Service Provider of your own preference provides connectivity from your on-prem location to the SoftLayer PoP. This could be an “in-facility” cross-connect to your own equipment, or an MPLS, Metro WAN, or fiber provider. The Direct Link NSP is the top-tier option we offer for private networking connectivity onto the SoftLayer private backbone.

Direct Link Cloud Exchange Provider


A cloud exchange provider is a carrier/network provider that is already connected to SoftLayer using multi-tenant, high capacity links. This allows you to purchase a virtual circuit at this provider and a Direct Link cloud exchange link at SoftLayer at reduced costs, because the physical connectivity from SoftLayer to the cloud exchange provider is already in place and shared amongst other customers.

Direct Link Colocation Provider


If your gear is colocated in a cabinet purchased via SoftLayer in a facility near or adjacent to a SoftLayer data center or POD, this option would work for you. Similar to the NSP option, this uses single-mode fiber, but there’s no need to connect to a SoftLayer PoP location first—you can connect directly from your cabinet to the relevant SoftLayer data center.

How do you communicate over a Direct Link?

The SoftLayer Direct Link service is a routed Layer 3 service. Routing options are: routing using a SoftLayer-assigned subnet, NAT, GRE or IPsec tunnels, VRF, and BGP.

Routing
We directly bind the 172.x.x.x IP block to your remote hosts that need to communicate with your SoftLayer infrastructure. You can either renumber your existing hosts on the remote networks, or bind these as secondary IPs and set up appropriate static routes on each host. You can then use the 172.x.x.x IP space to communicate with the 10.x.x.x IPs of your SoftLayer hosts as necessary. Routing via BGP is optional.
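
As an illustration, on a Linux host you could bind the assigned 172.x.x.x address as a secondary IP and add the static route with iproute2. The sketch below is a hypothetical Python helper that just renders the two commands; all interface names and addresses are made-up examples, not values SoftLayer assigns:

```python
import ipaddress

def direct_link_commands(iface, assigned_ip_cidr, sl_subnet, gateway):
    """Render iproute2 commands that bind a SoftLayer-assigned 172.x.x.x
    secondary IP and statically route a 10.x.x.x subnet over the Direct Link."""
    secondary = ipaddress.ip_interface(assigned_ip_cidr)  # e.g., 172.16.4.10/24
    dest = ipaddress.ip_network(sl_subnet)                # your SoftLayer hosts
    return [
        f"ip addr add {secondary} dev {iface}",
        f"ip route add {dest} via {gateway} dev {iface}",
    ]

cmds = direct_link_commands("eth1", "172.16.4.10/24", "10.120.8.0/24", "172.16.4.1")
print("\n".join(cmds))
```

Running the rendered commands (as root) is what actually binds the IP and installs the route.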

NAT
With NAT, SoftLayer assigns you a block of IPs from the 172.16.0.0/12 range to use for NAT on a device in your remote network, preventing conflicts with the SoftLayer 10.x.x.x IP range(s) assigned.
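
To make the mapping concrete, here is a small sketch (with hypothetical subnets) that pairs each remote host with an address from an assigned 172.16.0.0/12 block—the kind of one-to-one translation table a NAT device would hold:

```python
import ipaddress

def one_to_one_nat(remote_cidr, assigned_cidr):
    """Pair each remote host IP with a SoftLayer-assigned NAT IP.
    Both subnets must be the same size."""
    remote = ipaddress.ip_network(remote_cidr)
    assigned = ipaddress.ip_network(assigned_cidr)
    if remote.num_addresses != assigned.num_addresses:
        raise ValueError("NAT subnets must be the same size")
    return {str(r): str(a) for r, a in zip(remote.hosts(), assigned.hosts())}

# e.g., remote host 192.168.1.10 is seen on the SoftLayer side as 172.16.4.10
mapping = one_to_one_nat("192.168.1.0/24", "172.16.4.0/24")
print(mapping["192.168.1.10"])  # 172.16.4.10
```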

GRE / IPsec Tunneling
You can create a GRE or IPsec tunnel between the remote network and your infrastructure here at SoftLayer. This allows you to use whatever IP space you want on the SoftLayer side and route back across the tunnel to the remote network. That said, this configuration must be managed and supported by you, independent of SoftLayer. Furthermore, it could break connectivity to the SoftLayer services network if you use a 10.x.x.x block that SoftLayer has in use for services. This solution also requires that each host needing connectivity to both the SoftLayer services network and the remote network have two IPs assigned (one from the SoftLayer 10.x.x.x block, and one from the remote network block) and static routes set up on the host to ensure traffic is routed appropriately. You cannot assign whatever IP space you want directly on the SoftLayer hosts (BYOIP) and have it inherently routable on the SoftLayer network. The only way to do this is as outlined above, and it is not supported by SoftLayer.

VRF
You can opt in to using a VRF (Virtual Routing and Forwarding) instance. This allows you to use your own remote IP addresses, even if they overlap with much of the SoftLayer infrastructure; however, be aware that if you use the 10.x.x.x network, you still cannot overlap with your own hosts within SoftLayer or with the SoftLayer services network (10.0.0.0/14 and 10.200.0.0/14). You will not be able to use any of the following for your remote prefixes: 10.0.0.0/14, 10.200.0.0/14, 10.198.0.0/15, 169.254.0.0/16, 224.0.0.0/4, and any IP ranges assigned to your VLANs on the SoftLayer platform. When choosing the VRF option, using SoftLayer VPN services for management of your servers will no longer be possible. Routing via BGP is optional.
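
A quick way to sanity-check a proposed remote prefix against those reserved ranges is Python’s ipaddress module. This is only a sketch: remember to also append the ranges assigned to your own SoftLayer VLANs to the list.

```python
import ipaddress

# Prefixes disallowed for remote networks under the VRF option (per the list
# above); your own SoftLayer VLAN subnets must be added here as well.
RESERVED = [ipaddress.ip_network(p) for p in (
    "10.0.0.0/14", "10.200.0.0/14", "10.198.0.0/15",
    "169.254.0.0/16", "224.0.0.0/4",
)]

def remote_prefix_allowed(prefix):
    """True if the proposed remote prefix avoids every reserved range."""
    net = ipaddress.ip_network(prefix)
    return not any(net.overlaps(r) for r in RESERVED)

print(remote_prefix_allowed("10.50.0.0/16"))  # True: clear of reserved ranges
print(remote_prefix_allowed("10.1.0.0/16"))   # False: inside 10.0.0.0/14
```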


FAQ

Will I need to provide my own cross-connect?
Yes, you will need to order your own cross-connect at your data center of choice—to be connected to the SoftLayer switch port described in the LOA (Letter of Authorization) provided.

What kind of cross-connects are supported?
We strictly use Single Mode Fiber (SMF). We do not accept MMF or Copper.

What is the default size of the remote 172.16.*.* subnet assigned?
Unless otherwise requested, Direct Link customers will be assigned a /24 (256 IPs) subnet.

Which IP block has been reserved for SoftLayer servers on the backend?
We've allocated the entire 10.0.0.0/8 block for use on the SL private network. Specifically, 10.0.0.0/14 has been ear-marked for services. Here’s the full list of service subnets: http://knowledgelayer.softlayer.com/faqs/196#154

Which IP block has been reserved for point-to-point SoftLayer XCR to customer router?
The 10.254.0.0/16 range. We normally allocate either a /30 or /31 subnet for the point-to-point connection (between our XCR and your equipment on the other end of the Direct Link).
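
The difference between the two sizes is easy to verify: a /30 spends four addresses (network and broadcast included) for two usable hosts, while a /31 (per RFC 3021) uses exactly two, both usable on a point-to-point link. The allocation below is a made-up example:

```python
import ipaddress

p2p_30 = ipaddress.ip_network("10.254.10.0/30")  # hypothetical allocation
p2p_31 = ipaddress.ip_network("10.254.10.0/31")

# /30: four addresses total, two usable hosts.
print(p2p_30.num_addresses, len(list(p2p_30.hosts())))  # 4 2
# /31: two addresses total, both ends of the link.
print(p2p_31.num_addresses, [str(a) for a in p2p_31])   # 2 ['10.254.10.0', '10.254.10.1']
```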

Does Direct Link support jumbo frames?
Yes. Just like the private SoftLayer network, Direct Link supports jumbo frames up to an MTU (Maximum Transmission Unit) of 9000.

Pricing and locations

A list of available locations and pricing can be found at www.softlayer.com/direct-link.

-Mathijs Dubbe

January 6, 2016

Do You Speak SoftLayer Object Storage?

So you’ve made the decision to utilize object storage at SoftLayer. Great! But are you and your applications fluent in object storage? Do you know how to transfer data to SoftLayer object storage as well as modify and delete objects? How about when to use APIs and when to use storage gateways? If not, you’re not alone.

We’ve found that most IT professionals understand the difference between “traditional” (i.e., file and block) storage and object storage. They have difficulty, however, navigating the methods to interact with SoftLayer’s object storage service, which is based on OpenStack Swift. This is understandable, because traditional storage systems expose volumes and/or shares that can be mounted and consumed via iSCSI, NFS, or SMB protocols.

That’s not the case with object storage, including the object storage service offered by SoftLayer. Data is accessed only via REST APIs and language bindings, third-party applications supporting SFTP, the SoftLayer customer portal, or storage gateways.

The solutions are outlined below, including guidance on when to utilize each access method. Figure 1 provides a high level overview of the available options and their purpose.



Figure 1: Object storage data access methods

REST APIs and Language Bindings
The first and possibly most flexible method to access SoftLayer object storage is via REST APIs and language bindings. These APIs and bindings let you interact with SoftLayer object storage from the command line or programmatically. As a result, you can create scripts to upload files, download certain objects, and modify metadata related to an object. Additionally, the current support for PHP, Java, Ruby, and Python bindings gives application developers the flexibility to support SoftLayer object storage in their applications.

While this method is flexible in terms of capabilities, it does assume the user has knowledge and experience writing scripts, programs, and applications. REST APIs and language bindings aren’t the best methods for IT organizations that want to integrate existing environment backup, archive, and disaster recovery solutions. These solutions typically require traditional storage mount points, which REST APIs and language bindings don’t provide.

Third-Party Applications
The second method is to use third-party applications that support SFTP. This method abstracts away the REST APIs and gives users the ability to upload, download, and delete objects via a GUI. However, you won’t have the ability to modify metadata when using an SFTP client. Additionally, SoftLayer and OpenStack Swift place a 5GB upload limit on each object. If an object greater than 5GB needs to be uploaded, you have to follow the OpenStack method of creating large objects to ensure a successful and efficient upload. Unless you’re comfortable with this methodology, it’s strongly recommended that you use either the REST APIs or a storage gateway solution for files over 5GB.
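
The large-object technique uploads the file as segments under Swift’s 5GB ceiling, plus a manifest that stitches them back together. Here is a sketch of just the segment arithmetic, with sizes shrunk for readability (the real limit is 5GB):

```python
SEGMENT_LIMIT = 5 * 1024**3  # Swift's per-object ceiling, in bytes

def segment_ranges(total_size, segment_size=SEGMENT_LIMIT):
    """Return (offset, length) pairs covering a file of total_size bytes,
    each at most segment_size long."""
    ranges = []
    offset = 0
    while offset < total_size:
        length = min(segment_size, total_size - offset)
        ranges.append((offset, length))
        offset += length
    return ranges

# A 2,500-byte "file" with 1,000-byte segments: the last segment is partial.
print(segment_ranges(2500, segment_size=1000))  # [(0, 1000), (1000, 1000), (2000, 500)]
```

Each range is uploaded as its own object, and the manifest object then presents them as one logical file.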

SoftLayer Customer Portal
The third method to access SoftLayer object storage is to simply use the SoftLayer customer portal. Through the portal, you can add containers, add files to containers, delete files from containers, modify metadata, and enable CDN capabilities. As with the SFTP method, you can upload an unlimited number of files, but each file must not exceed 20MB in size. Also, there is no bulk upload option within the customer portal; users must select and upload files one at a time. While using the portal is simple, it does have some limitations and is best for users who only want to upload a few files of 20MB or less.

Storage Gateways
The last method to access and utilize SoftLayer object storage is a storage gateway. Storage gateways are unique among these methods: they expose traditional storage protocols like iSCSI, NFS, CIFS, and SMB and translate read/write/modify commands into REST API calls against the object storage service. As a result, these devices offer an easier path to SoftLayer object storage for businesses looking to integrate their on-premises environment with the cloud. Some storage gateways can also compress, deduplicate, and encrypt data in flight and at rest. Storage gateways work best for organizations looking to integrate existing applications that require traditional storage access methods (like backup software) with object storage, or to securely transfer and store data in cloud object storage.

Summary
While there are many methods to access SoftLayer object storage, it’s important to select the option that best meets your requirements for data access, security, and integration. For example, if you’re writing an application that requires object storage, you would most likely interact with it via REST APIs or language bindings. Or, if you simply need to connect existing applications in your environment to cloud object storage, a storage gateway would be the best option. In all cases, make sure you can meet your requirements with the chosen method.

Table 1 lists sample requirements and shows whether each option meets them. Use it to help with your decision-making process:



Table 1: Decision making tool

Click here for more information about SoftLayer’s object storage service and click here for FAQs on object storage.

Click here for information about SoftLayer’s REST-APIs and language bindings.

-Daniel De Araujo & Naeem Altaf

December 30, 2015

Using Ansible on SoftLayer to Streamline Deployments

Many companies today are leveraging new tools to automate deployments and handle configuration management. Ansible is a great tool that offers flexibility when creating and managing your environments.

SoftLayer has components built within the Ansible codebase, which means continued support for new features as the Ansible project expands. You can conveniently pull your SoftLayer inventory and work with your chosen virtual servers using the Core Ansible Library along with the SoftLayer Inventory Module. Within your inventory list, your virtual servers are grouped by various traits, such as “all virtual servers with 32GB of RAM,” or “all virtual servers with a domain name of softlayer.com.” The inventory list provides different categorized groups that can be expanded upon. With the latest updates to the SoftLayer Inventory Module, you can now get a list of virtual servers by tags, as well as work with private virtual servers. You can then use each of the categories provided by the inventory list within your playbooks.

So, how can you work with the new categories (such as tags) if you don’t yet have any inventory or a deployed infrastructure within SoftLayer? You can use the new SoftLayer module that’s been added to the Ansible Extras Project. This module provides the ability to provision virtual servers within a playbook. All you have to do is supply the build detail information for your virtual server(s) within your playbook and go.

Let’s look at an example playbook. You’ll want to specify a hostname along with a domain name when defining the parameters for your virtual server(s). The hostname can have an incremental number appended to the end if you’re provisioning more than one virtual server (e.g., Hostname-1, Hostname-2, and so on); just set the increment parameter to True. Incremental naming lets you uniquely name the virtual servers within your playbook, but it’s optional in case you want identical hostnames. Notice that you can also specify tags for your virtual servers, which is handy when working with your inventory in future playbooks.

Following is a sample playbook for building Ubuntu virtual servers on SoftLayer:

---
- name: Build Tomcat Servers
  hosts: localhost
  gather_facts: False
  tasks:
  - name: Build Servers
    local_action:
      module: softlayer
      quantity: 2
      increment: True
      hostname: www
      domain: test.com
      datacenter: mex01
      tag: tomcat-test
      hourly: True
      private: False
      dedicated: False
      local_disk: True
      cpus: 1
      memory: 1024
      disks: [25]
      os_code: UBUNTU_LATEST
      ssh_keys: [12345]

By default, your playbook will pause until each of your virtual servers completes provisioning before moving on to the next plays in your playbook. You can set the wait parameter to False if you choose not to wait for the virtual servers to complete provisioning. The wait parameter is helpful when you want to build many virtual servers, but some have different characteristics such as RAM or tag naming. You can also set the maximum time to wait on the virtual servers via the wait_timeout parameter, which takes an integer defining the number of seconds to wait.

Once you’re finished using your virtual servers, canceling them is as easy as creating them. Just add a new playbook step with a state of absent, specifying the virtual server ID or tags to identify which virtual servers to cancel.

The following example will cancel all virtual servers on the account with a tag of tomcat-test:

- name: Cancel Servers
  hosts: localhost
  gather_facts: False
  tasks:
  - name: Cancel by tag
    local_action:
      module: softlayer
      state: absent
      tag: tomcat-test

New features are being developed within the core inventory library to bring additional functionality to Ansible on SoftLayer. These developments can be tracked by following the Core Ansible Project hosted on GitHub. You can also follow the Ansible Extras Project for updates to the SoftLayer module.

As of this blog post, the new SoftLayer module is still pending inclusion into the Ansible Extras Project. Click here to check out the current pull request for the latest code and samples.

-Matt

December 28, 2015

Semantics: "Public," "Private," and "Hybrid" in Cloud Computing, Part II

Welcome back! In the second post in this two-part series, we’ll look at the third definition of “public” and “private,” and we’ll have that broader discussion about “hybrid”—and we’ll figure out where we go after the dust has cleared on the semantics. If you missed the first part of our series, take a moment to get up to speed here before you dive in.

Definition 3—Control: Bare Metal v. Virtual

A third school of thought in the “public v. private” conversation is actually an extension of Definition 2, but with an important distinction. In order for infrastructure to be “private,” no one else (not even the infrastructure provider) can have access to a given hardware node.

In Definition 2, a hardware node provisioned for single-tenancy would be considered private. That single-tenant environment could provide customers with control of the server at the bare metal level—or it could provide control at the operating system level on top of a provider-managed hypervisor. In Definition 3, the latter example would not be considered “private” because the infrastructure provider has some level of control over the server in the form of the virtualization hypervisor.

Under Definition 3, infrastructure provisioned with full control over bare metal hardware is “private,” while any provider-virtualized or shared environment would be considered “public.” With complete, uninterrupted control down to the bare metal, a user can monitor all access and activity on the infrastructure and secure it from any third-party usage.

Defining “public cloud” and “private cloud” using the bare metal versus virtual delineation is easy. If a user orders infrastructure resources from a provider, and those resources are delivered from a shared, virtualized environment, that infrastructure would be considered public cloud. If the user orders a number of bare metal servers and chooses to install and maintain his or her own virtualization layer across those bare metal servers, that environment would be a private cloud.

“Hybrid”

Mix and Match

Now that we see the different meanings “public” and “private” can have in cloud computing, the idea of a “hybrid” environment is a lot less confusing. In actuality, it really only has one definition: A hybrid environment is a combination of any variation of public and private infrastructure.

Using bare metal servers for your database and virtual servers for your Web tier? That’s a hybrid approach. Using your own data centers for some of your applications and scaling out into another provider’s data centers when needed? That’s hybrid, too. As soon as you start using multiple types of infrastructure, by definition, you’ve created a hybrid environment.

And Throw in the Kitchen Sink

Taking our simple definition of “hybrid” one step further, we find a few other variations of that term’s usage. Because the cloud stack is made up of several levels of services—Infrastructure as a Service, Platform as a Service, Software as a Service, Business Process as a Service—“hybrid” may be defined by incorporating various “aaS” offerings into a single environment.

Perhaps you need bare metal infrastructure to build an off-prem private cloud at the IaaS level—and you also want to incorporate a managed analytics service at the BPaaS level. Or maybe you want to keep all of your production data on-prem and do your sandbox development in a PaaS environment like Bluemix. At the end of the day, what you’re really doing is leveraging a “hybrid” model.

Where do we go from here?

Once we can agree that this underlying semantic problem exists, we should be able to start having better conversations:

  • Them: We’re considering a hybrid approach to hosting our next application.
  • You: Oh yeah? What platforms or tools are we going to use in that approach?
  • Them: We want to try and incorporate public and private cloud infrastructure.
  • You: That’s interesting. I know that there are a few different definitions of public and private when it comes to infrastructure…which do you mean?
  • Them: That’s a profound observation! Since we have our own data centers, we consider the infrastructure there to be our private cloud, and we’re going to use bare metal servers from SoftLayer as our public cloud.
  • You: Brilliant! Especially the fact that we’re using SoftLayer.

Your mileage may vary, but that’s the kind of discussion we can get behind.

And if your conversation partner balks at either of your questions, send them over to this blog post series.

-@khazard

December 21, 2015

Introducing the API release notes and examples library

The place to find out what new and exciting changes are happening on the SoftLayer platform is now softlayer.github.io. Specifically, the site highlights changes to the customer portal, the API, and any supporting systems. Please continue to rely on tickets created on your account for information regarding upcoming maintenance windows and other service-impacting events.

At SoftLayer, we follow agile development principles and release code in small but frequent iterations—usually about two every week. The changes featured in release notes on softlayer.github.io only cover what is publicly accessible. So while they may seem small, there are usually a greater number of behind-the-scenes changes happening.

Alongside the release notes is a growing collection of useful example scripts showing how to actually use the API in a variety of popular languages. While the number of examples is currently small, we are constantly adding more as they come up, so keep checking back. We are generally inspired to add examples by questions posted on Stack Overflow with the SoftLayer tag, so keep posting your questions there, too.

-Chris

December 18, 2015

Semantics: "Public," "Private," and "Hybrid" in Cloud Computing, Part I

What does the word “gift” mean to you? In English, it most often refers to a present or something given voluntarily. In German, it has a completely different meaning: “poison.” If a box marked “gift” is placed in front of an English-speaker, it’s safe to assume that he or she would interact with it very differently than a German-speaker would.

In the same way, simple words like “public,” “private,” and “hybrid” in cloud computing can mean very different things to different audiences. But unlike our “gift” example above (which would normally have some language or cultural context), it’s much more difficult for cloud computing audiences to decipher meaning when terms like “public cloud,” “private cloud,” and “hybrid cloud” are used.

We, as an industry, need to focus on semantics.

In this two-part series, we’ll look at three different definitions of “public” and “private” to set the stage for a broader discussion about “hybrid.”

“Public” v. “Private”

Definition 1—Location: On-premises v. Off-premises

For some audiences (and the enterprise market), whether an infrastructure is public or private is largely a question of location. Does a business own and maintain the data centers, servers, and networking gear it uses for its IT needs, or does the business use gear that’s owned and maintained by another party?

This definition of “public v. private” makes sense for an audience that happens to own and operate its own data centers. If a business has exclusive physical access to and ownership of its gear, the business considers that gear “private.” If another provider handles the physical access and ownership of the gear, the business considers that gear “public.”

We can extend this definition a step further to understand what this audience would consider to be a “private cloud.” Using this definition of “private,” a private cloud is an environment with an abstracted “cloud” management layer (a la OpenStack or CloudStack or VMware) that runs in a company’s own data center. In contrast, this audience would consider a “public cloud” to be a similar environment that’s owned and maintained by another provider.

Enterprises are often more likely to use this definition because they’re often the only ones that can afford to build and run their own data centers. They use “public” and “private” to distinguish between their own facilities and outside facilities. This definition does not make sense for businesses that don’t have their own data center facilities.

Definition 2—Population: Single-tenant v. Multi-tenant

Businesses that don’t own their own data center facilities would not use Definition 1 to distinguish “public” and “private” infrastructure. If the infrastructure they use is wholly owned and physically maintained by another provider, these businesses are most interested in whether hardware resources are shared with any other customers: Do any other customers have data on or access to a given server’s hardware? If so, the infrastructure is public. If not, the infrastructure is private.

Using this definition, public and private infrastructure could be served from the same third-party-owned data center, and the infrastructure could even be in the same server rack. “Public” infrastructure just happens to provide multiple users with resources and access to a single hardware node. Note: Even though the hardware node is shared, each user can only access his or her own data and allotted resources.

On the flip side, if a user has exclusive access to a hardware node, a business using Definition 2 would consider the node to be private.

Using this definition of “public” and “private,” multiple users share resources at the server level in a “public cloud” environment—and only one user has access to resources at the server level in a “private cloud” environment. Depending on the environment configuration, a “private cloud” user may or may not have full control over the individual servers he or she is using.

This definition echoes back to Definition 1, but it is more granular. Businesses using Definition 2 believe that infrastructure is public or private based on single-tenancy or multi-tenancy at the hardware level, whereas businesses using Definition 1 consider infrastructure to be public or private based on whether the data center itself is single-tenant or multi-tenant.

Have we blown your minds yet? Stay tuned for Part II, where we’ll tackle bare metal servers, virtual servers, and control. We’ll also show you how clear hybrid environments really are, and we’ll figure out where the heck we go from here now that we’ve figured it all out.

-@khazard

December 17, 2015

Xen Hypervisor Maintenance - December 2015

Security of your assets on our cloud platform is very important to the SoftLayer team. Last week, our Security Operations Center – which provides real-time monitoring of suspicious activity (including being part of multiple security pre-disclosure lists) – alerted our engineering team to a potential vulnerability (advisory CVE-2015-8555 / XSA-165) in the Xen hypervisor that, if left unremediated, could allow a malicious user to access data from another VSI guest sharing the same hardware node and hypervisor instance.

Upon learning of this vulnerability, SoftLayer issued a notification with a per-data center schedule for applying critical maintenance to remediate it. The maintenance was performed over multiple days on a POD-by-POD basis, with individual VM instances offline for only minutes while they rebooted. The updates were completed successfully in all data centers in advance of the public announcement of the vulnerability.

While deployment techniques such as clustering and failover across data centers and PODs allow continuous operations during a planned or unplanned event, SoftLayer is also committed to working aggressively to further reduce the impact of such events on your deployment and operations teams.

We value your business and will continue to take actions that ensure your environment is secure and efficient to operate. If you have any questions or concerns, don't hesitate to reach out to SoftLayer support or your direct SoftLayer contacts.

-Sonny

December 14, 2015

The SLayer Standard Vol. 1, No. 23

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Grocery store chain comes to SoftLayer.
We are excited to have Giant Eagle moving to our infrastructure. So why is the grocery chain moving away from building its own data centers? According to Jeremy Gill, Giant Eagle’s senior director of technology infrastructure, “The firm's focus has shifted to infrastructure-as-a-service for its future computing needs as an answer to the geographic spread of its users. It chose IBM over other providers because it offered both virtual servers and bare-metal servers on which Giant Eagle could run some of its legacy applications.”

Giant Eagle plans to transition its secondary data center, used for disaster recovery, to SoftLayer over the next 12 months. Gill also noted that moving to the cloud will help improve the company’s current disaster recovery system by “adding additional resiliency.” An InformationWeek article said, “The disaster recovery system, instead of being asleep in storage, will be represented by a virtual machine, running at idle, but ready to receive data and be scaled out.” Gill further noted, “The goal is to get the recovery time objective down from one or several hours to 15 minutes or less (possibly even instant recovery).”

Get more details here.

IBM Cloud leaves competitors in the dust.
According to the results of a recent independent study, Amazon.com and Microsoft are a step behind IBM’s cloud offering.

The independent research firm’s goal was to “measure the performance and relative cost of the cloud industry's biggest players. The objective of the study was two-fold: one, determine which of the cloud kings offered the most operations per second. Second, compare the relative cost for each operation performed. Not only did IBM's SoftLayer bare metal platform win the day -- it turns out it wasn't even close.”

So why is this a big deal? Looking solely at performance, the study found IBM far above its competitors. The study said, “For each dollar spent on IBM's SoftLayer bare metal cloud platform, its customers enjoy 4.63 billion operations.” It also highlighted, “That's a lot of bang for the buck, particularly compared to other cloud providers. Amazon.com's AWS customers get about a third fewer operations for each dollar spent, and Microsoft about a tenth.”

Read more about the study in The Motley Fool’s article.

-Rachel
