
August 4, 2016

Magic Quadrants, Performance Metrics & Water Cooler Discussions: Evaluating Cloud IaaS

When you make decisions about extending your infrastructure footprint into the cloud, you do so very intentionally. You hunt down analyst reports, ask peers for recommendations, and seek out quantitative research to compare the seemingly endless array of cloud-based options. But how can you be sure that you’re getting the most relevant information for your business case? Bias exists and definitions matter. So each perspective is really just a single input in the decision-making process.

The best process for evaluating any cloud solution involves four simple steps:

  1. Understand what you need.
  2. Understand what you’re buying.
  3. Understand how you’ll use it.
  4. Test it yourself.

Understand What You Need

The first step in approaching cloud adoption is to understand the resources your business actually needs. Are you looking to supplement your on-premises infrastructure with raw compute and storage power? Do your developers just need runtimes and turnkey services? Would you prefer infrastructure-abstracted software functionality?

In the past, your answers to those questions may have sent you to three different cloud providers, but times are changing. The lines between “Infrastructure as a Service,” “Platform as a Service,” and “Software as a Service” have blurred, and many cloud providers now deliver those offerings side by side. While SoftLayer cloud resources would be considered “infrastructure,” SoftLayer is only part of the broader IBM Cloud story.

Within the IBM Cloud portfolio, customers find IaaS, PaaS, and SaaS solutions to meet their unique workload demands. From an infrastructure perspective alone, IBM Cloud offers cloud servers and storage from SoftLayer; containers, databases, deployment, and monitoring tools within Bluemix; and turnkey OpenStack private cloud environments from Blue Box. We are integrating every component of the IBM Cloud portfolio into a seamless user experience so that when a customer needs to add cognitive capabilities or a private cloud or video services to their bare metal server infrastructure, the process is quick and easy.

Any evaluation of SoftLayer as a cloud provider would be shortsighted if it doesn’t take into account the full context of how IBM Cloud is bringing together multiple unique, highly differentiated offerings to provide a dynamic, full-featured portfolio of tools and services in the cloud. And as you determine what you need in the cloud, you should look for a provider that enables the same kind of cross-functional flexibility so that you don’t end up splintering your IT environment across multiple providers.

Understand What You’re Buying

Let’s assume that you’re primarily interested in deploying raw compute infrastructure in the cloud, since that’s SoftLayer’s primary focus. The seemingly simple first step in choosing the cloud infrastructure that best meets your needs is to define what “cloud infrastructure” actually means for your business.

Technology analyst firm Gartner defines cloud IaaS as “a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near real time, and metered by use.” While that definition seems broad, Gartner’s Magic Quadrant for Cloud Infrastructure as a Service explains that for cloud resources to be provisioned in “near real time,” they must be deployed in minutes (not hours). To be considered “metered by use,” they must be charged by the minute or hour (rather than by the month).

Given Gartner’s interpretation of “real time” and the “by use” measurement, bare metal servers that are fully configured by the customer and provisioned into a cloud provider’s data center (usually in about two hours and billed by the month) aren’t classified as cloud infrastructure as a service. That distinction is important, because many customers looking to extend workloads into the cloud are more interested in the performance of the resources than they are in provisioning times, and bare metal servers deliver better, more consistent performance than their virtualized counterparts.

The performance angle is important. Many cloud customers need servers capable of processing large big data workloads (data mining, numerical and seismic analysis, processing and rendering 3D video, real-time social media analysis, etc.). These workloads often involve petabytes of data, and bare metal servers are better suited to running them—and options like adding GPU cards for high-performance computing make them even more enticing. The fact is that most virtualized cloud servers that can be delivered in minutes are either not capable of handling these demanding workloads at all, or cannot handle them as well as the more powerful bare metal servers that are available in just a couple of hours.

In contrast to Gartner’s definition, other analysts support the inclusion of monthly bare metal servers in cloud infrastructure decisions. In “The Truth About Price-Performance,” Frost & Sullivan explains, “Bare metal servers provide the highest levels of raw ‘throughput’ for high-performance workloads, as well as flexibility to configure storage and network resources.” And Forrester Research published a full report to address the question, “Is bare metal really ‘cloud’?” The answer was, again, a resounding yes.

Using Gartner’s definition, the majority of SoftLayer’s cloud infrastructure as a service offerings are classified as “noncloud,” so they are not considered or measured in evaluations like the Magic Quadrant for Cloud IaaS. And without the majority of our business represented, those results are easy to misinterpret.

In practice, customers actually choose SoftLayer because of the availability of the offerings that Gartner considers to be “noncloud.” For example, Clicktale, a SoftLayer client, explains, “SoftLayer gives us the flexibility we need for demanding workloads. The amount of data we process is enormous, but SoftLayer’s bare metal machines are the best out there and we have a high level of control over them—it’s like owning them ourselves.”  

Our unique cloud platform with full support of both bare metal servers and virtual servers delivers compute resources that better suit our customers’ workloads in the cloud. Whether or not you consider those resources “cloud” is up to you, but if you opt for a more limited definition, you’ll cut out a large, important segment of the cloud market.

Understand How You’ll Use It

Once you settle on a definition of what meets your workload’s needs in the cloud, it’s important to evaluate how a given cloud resource will actually be used. Many of the factors that go into this evaluation are actually supplementary to the resource itself. Is it accessible via API? How can you connect it to your on-premises infrastructure? Will the data and workloads hosted on these resources be delivered quickly and consistently when your customers or internal teams need them?

While some of these questions are relatively easy to answer, others are nuanced. For example, SoftLayer's data center footprint continues to expand around the world, but this seemingly pedestrian process of making servers available in a new facility or geography is only part of the story. Because every new SoftLayer data center is connected to a single global network backbone that streamlines and accelerates data transfer to, from, and between servers, as our data center footprint grows, our network performance improves to and from users in that geography to SoftLayer customer servers in every other data center around the world.

And what does that underlying network architecture mean in practice? Well, we’ve run public network performance tests that show network speeds consistently 35 percent to 700 percent faster than those of other “leaders” in the cloud space. Most industry reports, including Gartner’s Magic Quadrant for Cloud Infrastructure as a Service, fail to acknowledge the importance of network performance in their assessments of cloud resources, focusing instead on the features and functionality of a given offering on its own.

The underlying platform capabilities and network infrastructure that support a given cloud resource aren’t obvious when comparing the speeds and feeds of cloud server specifications. So as you evaluate a cloud provider, it’s important to look beyond “what’s in the box” to how cloud resources will actually perform, both on the server and between the server and your data’s users. And the best way to get an understanding of that performance is to run your own tests.

Test It Yourself

The process of choosing a cloud provider or adopting a specific cloud resource cannot be purely academic. The nature of cloud computing allows for on-demand deployment of resources for real-world testing at a low cost with no long-term commitments. Making a decision to go with a given cloud provider or resource based on what anyone says—be it Gartner’s MQ, Forrester, Frost & Sullivan, SoftLayer, or your nephew—could have huge implications on your business.

SoftLayer will continue working with third-party research firms to demonstrate how our cloud infrastructure delivers up to 440 percent better performance for the cost compared with our competitors, but those stats are meant to start a conversation, not end it.

We encourage prospective customers to try SoftLayer for free. You can do this by taking advantage of up to $500 in free cloud resources for a month. Put our servers and our underlying platform to the test. Then make your own assessments on the vision and execution of SoftLayer’s unique approach to cloud infrastructure as a service.

August 1, 2016

“Lift and Shift” Existing VMware Workloads to the Public Cloud

Whatever your opinion is of IBM Cloud, the company has made tangible strides to provide a compelling hybrid cloud strategy for the enterprise. Several analysts have even recently acknowledged IBM’s leadership in this area. Based on the recent announcement with VMware, you’ll understand why existing VMware clients are pretty excited about IBM Cloud’s hybrid strategy.

The announcement notes that SoftLayer provides the capability to create secure and flexible VMware environments on top of IBM’s public cloud—now with expanded (and cost-effective) capabilities. These capabilities allow existing VMware customers to:

  • “Lift and shift” (read: extend) existing VMware workloads to the public cloud with the associated benefits (secure, compliant, global, OPEX, and so on)
  • Take advantage of existing VMware skills, assets, and processes (scripts, VMware admins, virtual machine templates, and so on)
  • Transition to the public cloud and flexible hybrid environments with minimal disruption


Figure 1: High-level architectural components (new components are in orange)

IBM Cloud encompasses a much larger scope that includes native SoftLayer and open source options, Bluemix/PaaS, as well as extensive cloud solutions and services.

The following are VMware-related FAQs, in addition to the ones you can find on KnowledgeLayer:

Why can’t I do “lift and shift” on other cloud platforms, e.g., AWS or Microsoft Azure?

In simple terms, you need access to the virtualization host in order to “fully” operate your VMware environment (as you’re used to doing in your own data center). Neither AWS nor Azure allows you this level of control; they also run different hypervisors. SoftLayer allows you to deploy and manage physical hosts in addition to standard virtual servers.

Why would I do “lift and shift” on SoftLayer and not on VMware’s own public cloud?

Performing the extension on SoftLayer lets you:

  • Choose from 28 data centers in 14 countries
  • Take advantage of SoftLayer’s unmetered private network
  • Have “full control” beyond what is specifically exposed as a “service” in vCloud (there is no access to the physical ESX hosts).

So what’s new with SoftLayer and VMware?

SoftLayer customers have deployed vSphere and vCenter on the SoftLayer cloud for some time. From personal experience, the most frequently requested additional capabilities are:

  • The ability to deploy “other” VMware components (like SRM for disaster recovery or NSX to take advantage of software-defined networking)
  • Cheaper and easier deployment


Figure 2: VMware products available to order in the SoftLayer customer portal

IBM and VMware responded by introducing the following on SoftLayer:

  • New, socket-based licensing for $85 per socket per month for Enterprise Plus (includes subscription and service)
  • Selection from the “full SDDC” portfolio, including:
    • Virtual SAN Standard and Advanced
    • NSX Enterprise (software-defined networking)
    • Site Recovery Manager (DR)
    • vRealize Automation Enterprise (cloud automation)
    • VMware Integrated OpenStack (VIO)
    • vSphere E+ and vCenter Server (standard & appliance)
    • Coming soon: Horizon Suite (VDI), which was recently announced

How do I get started?

With the latest portfolio enhancements, several new assets were published (in conjunction with plans to provide automated deployments and additional services going forward). Here’s my top list:

Top Tips:

  • Get familiar with and use the certified reference design (sounds logical, but I can’t stress it enough)
  • Make sure you pick from the documented building blocks (ensures the use of certified components like the appropriate RAID controller for VSAN, and so on)
  • Keep in mind that SoftLayer is a “self-service” IaaS platform—make sure you involve a partner with good VMware skills or secure appropriate services for such a project, especially if it’s complex
  • Evaluate all SoftLayer options, e.g., “standard” virtual servers might be a better option for new, cloud-enabled workloads

     

-Andreas Groth

July 29, 2016

Use DSR to Take a Load Off Your Load Balancer

Direct server return (DSR) is a load balancing scheme in which service requests come in via the load balancer virtual IP (VIP), but responses are sent by the back-end servers directly to the client. The load is taken off the load balancer because return traffic bypasses it entirely. You may want to do this if you serve larger files or have traffic that doesn’t need to be transformed at all on its way back to the client.

Here’s how it works: Incoming requests are assigned a VIP address on the load balancer itself. Then the load balancer passes the request to the appropriate server while only modifying the destination MAC address to one of the back-end servers.

DSR workflow

You need to be aware of the following when using DSR:

  • Address resolution protocol (ARP) requests for the VIP must be ignored by the back-end servers if the load balancer and back-end servers are on the same subnet. If not, the VIP traffic routing will be bypassed as the back-end server establishes a direct connection with the client.
  • The servers handling the DSR requests must respond to heartbeat requests with their own IP and must respond to requests for content with the load balancer VIP.
  • Application acceleration is not a possibility because the load balancer does not handle the responses from the backend servers.

Here are the configuration steps for Linux and Microsoft Windows OS, as well as the NetScaler setup:

Linux configuration

  1. Create an additional loopback interface with an IP alias (the load balancer VIP is represented by x.x.x.x) using the ifconfig command:

$ ifconfig lo:1 x.x.x.x broadcast x.x.x.x netmask 255.255.255.255

  2. Enter the following command to verify the configuration:

            $ ifconfig lo:1

lo:1      Link encap:Local Loopback
          inet addr:195.30.70.200  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:3924  Metric:1

Note that if the machine reboots, this configuration will not be persistent. To set this permanently, some Linux configuration files need to be edited. Steps on how to do this vary from distribution to distribution.
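
As an example (assuming a Debian or Ubuntu system that uses /etc/network/interfaces; other distributions use different configuration files), the loopback alias could be made persistent with a stanza like the following, where x.x.x.x is again the load balancer VIP:

    auto lo:1
    iface lo:1 inet static
        address x.x.x.x
        netmask 255.255.255.255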

  3. Disable invalid ARP replies by adding the following to the /etc/sysctl.conf file (then reload them as shown after the list):

    net.ipv4.conf.all.arp_ignore=1

    net.ipv4.conf.eth0.arp_ignore=1

    net.ipv4.conf.eth1.arp_ignore=1

    net.ipv4.conf.all.arp_announce=2

    net.ipv4.conf.eth0.arp_announce=2

    net.ipv4.conf.eth1.arp_announce=2
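
These kernel parameters take effect at boot. To apply them immediately without rebooting, reload the file; note that the eth0/eth1 entries above assume those are your interface names, so adjust them to match your server:

    $ sysctl -p /etc/sysctl.conf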

Microsoft Windows configuration

Use the following steps to create the loopback interface for a Microsoft Windows OS:

  1. Click the Windows Start menu > Control Panel > Add Hardware.
  2. Click Next.
  3. Select Yes, I have already connected the hardware and click Next.
  4. Select Add a new hardware device from the installed hardware list, then click Next.
  5. Select Install the hardware that I manually select from a list and click Next.
  6. Select Network adapters and click Next.
  7. Select Microsoft from the Manufacturer list.
  8. Choose Microsoft Loopback Adapter from the Network Adapter list and click Next.
  9. Click Next two more times and then click Finish.

Configure the virtual IP for both operating systems

The VIP address on the loopback interface needs to be set up with a netmask value of 255.255.255.255 (/32). It should be set up without the default gateway setting.

The interface metric needs to be set to 254 in order to prevent the loopback network adapter from answering ARP requests. When setting up the IP address, do the following: Click on Advanced, uncheck Automatic metric, and set the Interface Metric to 254. (These steps are different for certain versions of Microsoft Windows; for example, in Windows Server 2012, the loopback interface is renamed to Microsoft KM-TEST Loopback Adapter.)
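
As a rough sketch, the same settings can also be applied from an elevated command prompt with netsh, assuming Windows Server 2008 or later and that the loopback adapter has been renamed to "Loopback" (substitute whatever name your adapter has, and x.x.x.x for the VIP):

    netsh interface ipv4 add address "Loopback" x.x.x.x 255.255.255.255
    netsh interface ipv4 set interface "Loopback" metric=254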

NetScaler configuration

There are several features that need to be enabled within NetScaler in order for DSR to work. All of the following steps can also be performed through the CLI; the corresponding CLI commands are included.

MAC-based forwarding

DSR uses MAC-based forwarding, which needs to be enabled because it’s disabled by default. To enable MAC-based forwarding in NetScaler:

  1. Click the Configuration tab > System > Settings > Configure modes.
  2. Select the MAC-based forwarding mode and click OK.

These steps can be done through CLI as well; use the enable ns mode mbf command.

Load balancing

Next, the load balancing feature needs to be enabled because it’s disabled by default, too.

  1. Navigate to System > Settings. In Configure Basic Features, select Load Balancing.

The CLI command is enable ns feature lb.

Server object

A server object needs to be created for each load-balanced server.

  1. Click the Configuration tab > Traffic Management > Load Balancing Servers > Add.
  2. You will need the server name and the IP address of the server.

The CLI command is add server Serverx y.y.y.y.

Services

Each server offers one or more services (such as HTTP, DNS, MySQL, and so on). NetScaler load balances traffic across services, not across servers. A service with the protocol ANY needs to be created, as well as a basic monitor, and Use Source IP (USIP) needs to be enabled. The service has to be tied to a server on a specific port (in the example, port 80).

  1. Click the Configuration tab > Traffic Management > Load Balancing > Services > Add.
  2. Select the appropriate services and click OK.

The CLI command is add service ANY_serverx_service serverx ANY 80 -usip Yes.

Virtual Server

A virtual server that balances traffic to one or more of the services is required. The protocol should be ANY (just like the service), the load balancing method Source IP Hash, and the redirection mode MAC-based (that is, MAC-based forwarding). It is recommended to make the virtual server sessionless, as no return traffic passes through the NetScaler.

  1. From the Configuration Utility, navigate to Traffic Management > Load Balancing > Virtual Servers and fill in the required fields.

From the CLI, run the following commands to create the load balancing virtual server:

add lb vserver <VServer_Name> ANY <IP_Address> * -m MAC <-connfailover STATELESS>

add lb vserver DSR ANY -M MAC -connfailover stateless

add lb vserver vserver_DSR ANY 10.0.0.11 80 -lbmethod SOURCEIPHASH -m MAC -sessionless ENABLED

Be aware that for certain services (such as FTP), you need to enable stateless connection failover (-connfailover STATELESS).

Lastly, bind the service to the virtual server via the CLI (this may not be needed if you previously bound the service via the GUI when creating it):

bind lb vserver vserver_DSR service_Server1_ANY
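
To sanity-check the configuration, the virtual server and the state of its bound service can be inspected from the same CLI (vserver_DSR is the example name used above):

    show lb vserver vserver_DSR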

-Neb Bosworth

July 28, 2016

SL-APS: The faster way for resellers to offer SoftLayer services

SL-APS is a SoftLayer infrastructure application package that gives cloud service providers a simple and fast way to offer new and existing SoftLayer services to their customers. Because it is based on the Application Packaging Standard (APS), you can get your Odin Cloud Marketplace Storefront up and running in a matter of days—instead of spending months on development and integration—so you can sell and deploy SoftLayer services (virtual servers, bare metal servers, network devices, etc.) faster than ever before.

Putting the "r" back in fee

Providers (distributors and resellers) can download the software package from IBM free of charge. Once installed, the SL-APS package will dynamically discover SoftLayer products, pricing, and available data centers and display them in a “configurator” interface.

You're in control

Providers then customize or simplify the product set by building a list of products they wish to offer to customers. The package also accommodates Provider-Reseller-Customer Sales models as well as two-tier distribution.

IBM SL-APS v3.0 (July 2016) provides deeper integration with many of SoftLayer’s catalog items and services. New features include:

  • SAML SSO to SoftLayer Customer Portal: Odin SL-APS customers can securely sign on to their SoftLayer customer portal with one click, using a strong one-time security token.
  • SoftLayer Invoice-Driven Billing: All Odin invoices relating to SoftLayer services are generated directly from SoftLayer invoices on the customer’s SoftLayer monthly billing date, converted for the customer/reseller currency and discount rate.
  • Detailed Reseller and End-Customer Invoices: Resellers and end customers receive detailed invoices containing all SoftLayer devices and associated charges, converted for the customer/reseller currency and discount rate.
  • POWER8 servers (the latest addition to the SoftLayer catalog): POWER8 servers provide bare metal power for big data workloads.

     

-Christopher

July 26, 2016

Cloud HSM: Our secure key management approach

Customers concerned about key management often require an HSM (hardware security module). They want the same level of key protection in the cloud as they have on premises. An HSM provides guaranteed access to encrypted data by authorized users by storing mission-critical master encryption keys in the HSM and backing them up. Powered by SafeNet’s HSM and hosted in geographically dispersed data centers in controlled environments independently validated for compliance, IBM Cloud HSM offers enterprises high-assurance protection for encryption keys and also helps customers meet their corporate, contractual, and regulatory compliance requirements.

You can easily order Cloud HSM through the SoftLayer customer portal or SoftLayer APIs. A dedicated FIPS-compliant HSM device will be provisioned inside your private network.

The HSM access credentials provided to you are reset as part of your first login, which ensures that you are the only entity with access to your HSM functionality. SoftLayer is responsible for managing the HSM’s health and uptime; this is done without access to the partitions, roles, or keys stored and managed on the HSM. You are responsible for using the HSM to manage and back up your keys.

Cloud HSM supports a variety of use cases and applications, such as database encryption, digital rights management (DRM), public key infrastructure (PKI), authentication and authorization, document signing, and transaction processing. NAT and IP aliasing will not work with Cloud HSM, while bring-your-own-IP (BYOIP) might be possible in the future. Currently, Cloud HSM is not available in federal data centers, but it is on the roadmap.

Configuration

Cloud HSM is used and accessed in exactly the same way as an HSM managed on premises.

As part of provisioning, you receive administrator credentials for the appliance; you then initialize the HSM, manage it, create roles, and create HSM partitions on the appliance. After creating HSM partitions, you can configure a Luna client (on a virtual server) that allows applications to use the APIs provided by the HSM. The cryptographic partition is a logical and physical security boundary known only to the partition owner you authorize. Any attempt to tamper with the physical appliance will result in data being erased. Similarly, incorrect login attempts beyond a threshold will result in partitions being erased, so we highly recommend backing up your keys.
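
As an example, assuming the SafeNet Luna client software is installed on your virtual server and the network trust link (NTLS) to the appliance has been established, a typical sanity check is to list the partitions the client can see:

    $ vtl verify

If the partition you created appears in the output, applications on that server can reach the HSM through the APIs it exposes.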


Cloud HSM logical architecture

The following diagram illustrates the roles and responsibilities of SoftLayer and the customer:


Cloud HSM roles and responsibilities of SoftLayer and the customer

Cloud HSM key features

  • Secure key storage: Tamper-proof, FIPS 140-2-compliant hardware with multiple levels of authorization is provisioned in a private network in a secure data center, ensuring the safety of your data. SoftLayer has no access to your keys, and the device is completely owned by the customer until it is cancelled.
  • Reliable key storage: Customers are encouraged to back up the keys and configure HSMs in high availability mode. SoftLayer will monitor uptime and connectivity.
  • Compliance requirements: SafeNet’s FIPS 140-2 validated appliance helps you meet the requirements of many compliance standards, including PCI-DSS.
  • Improved and secure connectivity: HSMs are deployed in your private VLAN to maintain more efficient and secure connectivity. Deploying a physical HSM appliance versus software running on a general purpose server provides users with an appliance that is built to handle the resource-intensive tasks of cryptography processing while reducing the latency to applications.
  • Audit requirements: Audit logs can be found on the HSM appliance.
  • On-demand: Cloud HSM can be easily ordered and canceled using the SoftLayer customer portal or APIs and is modeled to scale rapidly. The pricing model involves a one-time setup fee and recurring monthly fees.

     

-Neetu

July 11, 2016

Certified Ubuntu Images Available in SoftLayer

In partnership with Canonical, we are excited to announce today that SoftLayer is now an Ubuntu Certified Public Cloud Partner for Ubuntu guest images.  

For clients, this means you can harness the value of deploying certified Ubuntu images on SoftLayer. That value includes:

  • Running Ubuntu on SoftLayer’s high performance and customizable virtual and bare metal server offerings
  • Ubuntu cloud guest image updates with enablement, publication, development, and maintenance across all data centers, so customers always have the latest Ubuntu features, compliance accreditations, and security updates
  • Quality assurance that ensures customers enjoy one of the highest-quality Ubuntu experiences, including some of the fastest security patching of any Linux provider
  • Archive mirrors for faster update retrieval for Ubuntu images
  • The opportunity to engage with Canonical for enterprise-grade support on Ubuntu cloud guest images, and use Landscape, Canonical’s award-winning system monitoring tool

In a continued effort to enhance the client experience, SoftLayer’s partnership with Canonical assures clients of a consistent SoftLayer experience as they look to accelerate the transformation of their Ubuntu workloads.

“Canonical has a broad partnership with IBM with Ubuntu images already available on LinuxOne, Power and Z Systems,” said Anand Krishnan, EVP, Cloud, Canonical. “By signing this new public cloud partnership with SoftLayer we have made Ubuntu images available for its customers.”

Canonical continually maintains, tests, and updates certified Ubuntu images, making the latest versions available through SoftLayer within minutes of their official release by Canonical. This means you will always have the latest certified Ubuntu images.

Please visit these pages for more information:

Find an Ubuntu Partner

Ubuntu Certified Public Cloud

About Canonical

Canonical is the company behind Ubuntu, the leading OS for container, cloud, scale-out and hyperscale computing. Sixty-five percent of large-scale OpenStack deployments are on Ubuntu, using both KVM and the pure-container LXD hypervisor for the world’s fastest private clouds. Canonical provides enterprise support and services for commercial users of Ubuntu.

Canonical leads the development of Juju, the model-driven operations system, and MAAS (Metal-as-a-Service), which creates a physical server cloud and IPAM for amazing data center operational efficiency. Canonical is a privately held company.

July 7, 2016

New SoftLayer Accounts Now With IBMid Authentication

Hi, and welcome to SoftLayer. We’re so happy you are joining our cloud family. For our new customers, if you haven’t heard the news, SoftLayer was acquired by IBM in 2013. With this comes transition, including the setup of an IBMid.

But this is great news for our new customers: not only does this ID allow you to manage your SoftLayer account, but you can also access Bluemix-based services and resources using a single sign-on. Although they are separate accounts, you can link your Bluemix and SoftLayer accounts. This is just one step toward providing you with an optimal IBM Cloud user experience.

Here’s what you need to know.

SoftLayer account login screen

Customers who created SoftLayer accounts after July 6, 2016 will need to follow the “IBMid Account Login” link at the bottom of the customer portal login page to use their IBMid to log in. Customers will be redirected to their Customer Portal Dashboard after their IBMid has been successfully authenticated.

Sign in to IBM

Two-Factor Authentication for IBMid Users

Customers with Two-Factor Authentication enabled will be asked to provide a security code, as shown below.

Two-Factor Authentication

How do I know if my account is using SoftLayer IDs or IBMids?

An IBMid is always an email address (e.g., joe@company.com). User accounts created after July 6, 2016 must follow the “IBMid Account Login” link and use their IBMid credentials, provided during their SoftLayer user creation process, to log into the SoftLayer customer portal.

If users do not know when their accounts were created and they’re using an email address to log in, they should attempt to use the SoftLayer login form first. In the future, these forms will be combined into a single one in order to simplify this experience.

Use of VPN Access and API Key

An IBMid cannot be used for VPN access. If a SoftLayer user has been granted VPN access, he or she can connect to VPN using the VPN username and password found on the customer’s profile page in the SoftLayer customer portal.

An IBMid cannot be used for API calls. If a SoftLayer user has been granted an API Key, that customer can access his or her API username and key on the profile page in the SoftLayer customer portal.
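
For example, once you have retrieved the API username and key from your profile page, a quick way to confirm they work is to call the SoftLayer REST API with HTTP basic authentication (SL_username and API_key below are placeholders; the account call is just an illustration):

    $ curl -u SL_username:API_key https://api.softlayer.com/rest/v3/SoftLayer_Account/getObject.json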

Access to VPN and API credentials has not changed for current users.

Edit User Profile

A Note to Our Current Customers

For the time being, existing accounts created prior to July 6, 2016 will continue to use the SoftLayer username and password authentication. If you have any questions, please feel free to contact your sales representative.

For more information, check out these KnowledgeLayer articles:

Reset the SoftLayer Customer Portal Password

Add a New User to a Customer Portal Account

Bluemix FAQ

Remove a User from the Customer Portal

Log in as a New User

Set Up Your Account

Customer Portal FAQ

Edit a User Profile

If you are experiencing issues with IBMid login, please email identsrv@us.ibm.com with the subject, "Problem Logging In With IBMid."

July 5, 2016

Figuring Out the “Why” of IBM

When IBM acquired SoftLayer, I felt proud. I thought, “Now we can make a difference.” Why did I feel that way, and why didn’t I think we could make a difference where we were? What brought out these feelings about IBM?

As I expand my knowledge of programming, I often come across books that don’t pertain strictly to software development—but they pique my interest. The most recent of those is Start with Why: How Great Leaders Inspire Everyone to Take Action by Simon Sinek, suggested in a recent talk by Mary Poppendieck about leading development. Start with Why is a book about product development, leadership, and life in general. It explains why we feel the way we do about certain companies and how we should move forward to generate that feeling about ourselves and the companies we believe in.

Who cares why?

In Start with Why, Sinek talks about several big companies, including Apple, Harley-Davidson, and Walmart. He writes that one thing that is very important when developing a product, or even working in a company, is to understand that company’s “why.” What makes the company tick? He says Apple has a clear message about this: “to start a revolution.” He claims Apple is clear about why it does what it does, and that it has formed a culture of people who care more about that message than about any one product the company sells. The products, in turn, embody that message, as do Apple employees. This is why, when Apple decided to move into the phone, tablet, and music industries rather than focus only on computers and hardware, its customers moved with it. Although the differences between an Apple iPad and a Dell tablet might be small, Apple consumers like feeling that they are part of the Apple society, so they choose what they know and love, based on gut instinct.

Think now about Harley-Davidson. Many of its customers have tattoos of the Harley-Davidson logo, because those customers identify with the lifestyle that Harley-Davidson projects—a statement more about the person than the company. It says, “I am a Harley-Davidson type of person.” Mitsubishi or Kawasaki could have similar bikes—of even better quality and at lower prices—but that customer is choosing Harley-Davidson. They have made a lifetime commitment to a brand because they identify with the iconography and want to be a part of the society that is Harley-Davidson.

What is IBM’s “why”?

I applied the idea of “why” to my work and my company, bringing up the question, “What is IBM’s ‘why’?” In pursuit of this question, I searched “Why IBM?” on the IBM intranet. Luckily, there was a document meant for sales reps to help define IBM for new customers with the following on the first slide:

“IBM is a global information technology services company operating in over 170 countries. We bring innovative solutions to a diverse client base to help solve some of their toughest business challenges. In addition to being the world’s largest information technology and consulting services company, IBM is a global business and technology leader, innovating in research and development to shape the future of society at large.”

I dissected this blurb, pulling out the parts which describe IBM. I ended up with this:

  • IBM is large (the world’s largest)
  • IBM is global (diverse, international, in more than 170 countries)
  • IBM is business-oriented (solves business challenges)
  • IBM is a technology leader (innovative, focus on research and development)
  • IBM is shaping the future of society at large

Then I put it together into a single sentence:

“IBM is a large, global, business-oriented technology leader, shaping the future of society at large.”

That is when I realized that I was too focused on IBM’s “what,” so I removed everything that focused too heavily on the subject of the sentence (IBM) and focused my attention instead on the predicate. This left me with a single, easy sentence answering the questions: “Why is IBM?”, “What is its function?”, and “What are we trying to do?”.

“IBM is shaping the future of society at large.”

This is why IBMers get up in the morning. This is why we work hard. This is what we are hoping to accomplish in our own lives.

Simon Sinek states, “The 'why' comes from looking back.” Every person or company’s achievement should prove the “why”—so how do we prove IBM’s “why”? Let’s take a look at some of our victories in the past and present and compare.

In 1937, IBM’s tabulating equipment helped maintain employment records for 26 million people in support of the Social Security Act. In 1973-1974, IBM developed the Universal Product Code and released systems to support bar code scanning and automatic inventory maintenance. In a recent employee webcast, IBM’s senior vice president of Global Technology Services, Martin Jetter, communicated the idea, “We are the backbone of the world’s economy.” His supporting comments included our footprint in the airline industry, stating, “We manage the systems that support 25 percent of the total paid-passenger miles flown globally.” He also said, “Our services support 60 percent of daily card transactions in banking, 53 percent of mobile connections worldwide in telecom, and 61 percent of passenger vehicles produced in the auto industry.”

Lately, IBM has brought attention to its revolutionary AI, better known as Watson, and is ushering in the idea of cognitive business analytics. In my opinion, these things prove that we are invested in shaping the future of a global society.

What does this mean about IBM? What does this mean about me?

I can’t speak for IBM as a whole, but I can speak for myself. I want to be a part of something bigger than myself; I want to contribute in a meaningful way and understand what that contribution means. I believe in a global society; we are all in this world together, and there are more important issues we can deal with than our differences. I want to lead, or be a part of a team that leads; I strive to be successful. I am not OK with the status quo; I believe there is a better way. I have hope for the future. I don’t want to start a revolution. I want to be a part of something more pervasive, an underlying foundation that helps society thrive—not just changing society for the sake of change. I want to help lay a foundation that allows it to thrive and grow into something better. I believe that IBM identifies with these goals and projects this same message—a message that resonates with me at a very basic level. It sums up why I am proud to be an IBMer.

What about you?

“I am an IBMer” is not a sentiment that only employees need. In fact, it should go well beyond being employed at IBM. Our customers should feel the sentiment as well. Even people completely unaffiliated with IBM should be able to say, “I am an IBMer,” meaning that they believe in the same dream—the dream of a global society, working together to meet global goals; a dream about the future of society at-large.

What does IBM mean to you? Are you an IBMer too?

-Kevin Trachier

June 30, 2016

HA, DR, GSLB, LB: The What’s What and Who’s Who of Uptime

As a SoftLayer sales engineer, I get the opportunity to talk to a wide range of customers on a daily basis about almost everything under the sun. This is one of my favorite parts of working at SoftLayer: every day is unique, and the topics range from a standalone LAMP server to thousands of servers in a big data cluster—and everything in between. It can be challenging at times, due to the infinite number of solutions that SoftLayer can run, but it also gives me the chance to learn and teach others. In this blog post, I’ll discuss high availability (HA), disaster recovery (DR), global server load balancing (GSLB), and load balancing (LB), as I occasionally hear customers mix up the terms, and I think a little clarity on the topics could help.

Before we dive into the differences, let’s define each in alphabetical order (I did take a stab at stating this in my own words, but Wikipedia does such a good job that I paraphrased from its descriptions and added in a little more context).

  • High availability (HA): HA is a characteristic of a system that aims to ensure an agreed level of operational performance for a higher-than-normal period. There are three principles of system design in high availability engineering: the elimination of single points of failure (SPOF), reliable failover, and failure detection.
  • Disaster recovery (DR): DR involves a set of policies and procedures to enable the recovery or continuation of systems following a natural or human-induced disaster. Disaster recovery focuses on keeping all essential aspects of a business functioning despite significant disruptive events.
  • Global server load balancing (GSLB): GSLB is a method of splitting traffic across multiple servers using DNS and geographical locations as the means to determine where request traffic will be sent.
  • Load balancing (LB): LB is a way to distribute processing and communications evenly across multiple servers within a data center so that a single device does not carry an entire load. LB is essential in situations where it is difficult to predict the number of requests issued to a server, and it can distribute requests that would have been made to a single server to ease the load and minimize latency and other issues.

Now that we've defined each of these topics, let’s quickly check off the main points of each topic:

HA

  • No single points of failure (SPOF)
  • Each component of a system has at least one failover node

Hardware Recommendations

  • If a server is part of an HA pair, it is recommended to run the OS on at least a RAID 1 group and DATA partitions on a RAID 1, 5, 6, 10, or higher group
  • If the system is part of a cluster, it is always recommended to run the OS on at least a RAID 1 and DATA partitions can be optimized for storage capacity 
  • Redundant power

Network Recommendations

  • Dual path networking/uplinks
  • Utilize portable IP addresses for HA/service configurations, because primary IPs assigned directly to a server or VLAN are specific to that instance and can lead to IP conflicts or unintended disruption in service
  • Database systems are configured at the application level for HA or clustering
  • Web/app systems are configured at the OS or application level in an HA pair or are placed behind a load balancer

DR

  • Companies should analyze their infrastructure and personnel assignment to identify mission-critical system components and personnel
  • A plan should be developed to identify and recover from a disaster; this plan should also include recovery time objective (RTO) and recovery point objective (RPO) to reflect the business model
  • A secondary data center is recommended to mitigate risks of a major natural or human disaster
  • Mission-critical systems should be on standby or quickly deployable to meet or beat a company’s stated RTO
  • Backup data should be stored offsite and ideally at the secondary DR site to reduce recovery time
  • Once a plan is in place, mock fail-overs should be performed regularly to ensure the DR plan is fully executable and all parties understand their roles

GSLB

  • Complete, independent systems should be deployed into two or more DC locations
  • Each location is accessible via a unique IP address(es)
  • Data systems should be designed to operate independently in each region and possibly be synchronized on a schedule or on demand
  • Each location hosts at least one LB instance that supports GSLB
  • Based on availability of each site, the location of a user, or data sovereignty regulations, users are directed to an available site via DNS resolution
  • Once a user has been directed to a site, standard load balancing takes precedence until the time to live (TTL) of the DNS resolution expires
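
A quick way to see this behavior in action is to query your GSLB-enabled hostname (www.example.com below is a placeholder) and watch the answer and its TTL change depending on where you resolve it from:

    $ dig www.example.com A +noall +answer

A short TTL on that answer is what allows traffic to shift to another site quickly when the preferred site fails a health check.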

LB

  • Each server within an LB pool should reside in the same DC as the LB, or performance may degrade and health checks may fail
  • A minimum of two servers should be included in an LB pool
  • Load should be spread across servers based on the specification of each server; if all servers are equal in specs, the load should be shared equally
  • Each server in an LB pool will need a public IP address and an active public interface to respond to Internet requests
  • When possible, it is recommended to leverage LB features such as SSL offload to minimize load on web servers

I hope this clarifies the terms and uses of HA, DR, GSLB, and LB. Without background, tech jargon can be a bit ambiguous. In this case, some of the terms even share some of the same acronyms, so it’s easy to mix them up. If you haven't had a chance to kick the tires of the SoftLayer LB offerings or if you’re looking to build a DR solution on SoftLayer, just let us know. We’ll be happy to dive in and help you out.

- JD

 

June 27, 2016

Disaster Recovery in the Cloud: Are You Prepared?

While the importance of choosing the right disaster recovery solution and cloud provider cannot be overstated, having a disaster recovery runbook is equally important (if not more so). I have been involved in multiple conversations where the customer’s primary focus was the implementation of the best-suited disaster recovery technology, but the conversation regarding the DR runbook was either missing completely or lacked key pieces of information. Today, my focus will be to lay out a framework for what your DR runbook should look like.

“Eighty percent of businesses affected by a major incident either never re-open or close within 18 months.” (Source: Axa Report)

What is a disaster recovery runbook?

A disaster recovery runbook is a working document that outlines a recovery plan with all the necessary information required for execution of this plan. This document is unique to every organization and can include processes, technical details, personnel information, and other key pieces of information that may not be readily available during a disaster situation.

What should I include in this document?

As previously stated, a runbook is unique to every organization depending on the industry and internal processes, but there is standard information that applies to all organizations and should be included in every runbook. Below is a list of the most important information:

  • Version control and change history of the document.
  • Contacts with titles, phone numbers, email addresses, and job responsibilities.
  • Service provider and vendor list with point of contact, phone numbers, and email addresses.
  • Access Control List: application/system access and physical access to offices/data centers.
  • Updated organization chart.
  • Use case scenarios based on DR testing, i.e., what to do in the event of X, and the chain of events that must take place for recovery.
  • Alert and custom notifications/emails that need to be sent for a failure or DR event.
  • Escalation procedures.
  • Technical details and explanation of the disaster recovery solution (network layouts, traffic flows, systems and application inventory, backup configurations, etc.).
  • Application-based personnel roles and responsibilities.
  • Failover/failback procedures and how to revert to normal operations.

How to manage and execute the runbook

Processes, applications, systems, and employees can all change on a daily basis. It is essential to update this information in the DR runbook on a regular basis to ensure the accuracy of the document.

All relevant employees should receive DR training and should be well informed of their roles and responsibilities in a DR event. They should be asked to take ownership of certain tasks, which should be well documented in the runbook.

In short, we all hope to avoid a disaster. But when it happens, we must be prepared to tackle it. I hope the information above will be helpful in taking the first step towards preparing a DR runbook. Please feel free to contact me for additional information or guidance.

-Zeb

 
