partner-marketplace

June 30, 2016

HA, DR, GSLB, LB: The What’s What and Who’s Who of Uptime

As a SoftLayer sales engineer, I get the opportunity to talk to a wide range of customers on a daily basis about almost everything under the sun. This is one of my favorite parts of working at SoftLayer: every day is unique and the topics range from a standalone LAMP server to thousands of servers in a big data cluster—and everything in between. It can be challenging at times, due to the infinite number of solutions that SoftLayer can run, but it also gives me the chance to learn and teach others. In this blog post, I’ll discuss high availability (HA), disaster recovery (DR), global server load balancing (GSLB), and load balancing (LB), as I occasionally hear customers mix up the terms, and I think a little clarity on the topics could help.

Before we dive into the differences, let’s first define each in alphabetical order (I did take a stab at stating this in my own words, but Wikipedia does such a good job that I paraphrased from its descriptions and added in a little more context).

  • High availability (HA): HA is a characteristic of a system, which aims to ensure an agreed level of operational performance for a higher than normal period. There are three principles of system design in high availability engineering: the elimination of single points of failure (SPOF), reliable failover, and failure detection.
  • Disaster recovery (DR): DR involves a set of policies and procedures to enable the recovery or continuation of systems following a natural or human-induced disaster. Disaster recovery focuses on keeping all essential aspects of a business functioning despite significant disruptive events.
  • Global server load balancing (GSLB): GSLB is a method of splitting traffic across multiple servers using DNS and geographical locations as the means to determine where request traffic will be sent.
  • Load balancing (LB): LB is a way to distribute processing and communications evenly across multiple servers within a data center so that a single device does not carry an entire load. LB is essential in situations where it is difficult to predict the number of requests issued to a server, and it can distribute requests that would have been made to a single server to ease the load and minimize latency and other issues.

Now that we've defined each of these topics, let’s quickly check off the main points of each topic:

HA

  • No single points of failure (SPOF)
  • Each component of a system has at least one failover node

Hardware Recommendations

  • If a server is part of an HA pair, it is recommended to run the OS on at least a RAID 1 group and data partitions on a RAID 1, 5, 6, 10, or higher group
  • If the system is part of a cluster, it is always recommended to run the OS on at least a RAID 1 group; data partitions can be optimized for storage capacity
  • Redundant power

Network Recommendations

  • Dual path networking/uplinks
  • Utilize portable IP addresses for HA/service configurations, as primary IPs assigned directly to a server or VLAN are specific to that instance and can lead to IP conflicts or unintended service disruptions
  • Database systems are configured at the application level for HA or clustering
  • Web/app systems are configured at the OS or application level in an HA pair or are placed behind a load balancer (a minimal failure-detection sketch follows this list)
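
To make the failure detection and failover principles above concrete, here is a minimal Python sketch of a health-check monitor that promotes a standby node when the active node stops responding. The addresses, port, and thresholds are hypothetical; production setups typically use purpose-built tooling (e.g., keepalived/VRRP moving a portable IP, as recommended above) rather than a hand-rolled loop:

import socket
import time

PRIMARY = ("10.0.0.10", 80)   # hypothetical active node
STANDBY = ("10.0.0.11", 80)   # hypothetical failover node
CHECK_INTERVAL = 5            # seconds between health checks
FAIL_THRESHOLD = 3            # consecutive failures before failing over

def is_healthy(addr, timeout=2):
    # Basic TCP health check: can we open a connection to the service port?
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def monitor():
    active, standby = PRIMARY, STANDBY
    failures = 0
    while True:
        if is_healthy(active):
            failures = 0
        else:
            failures += 1
            if failures >= FAIL_THRESHOLD and is_healthy(standby):
                # Real deployments would move a portable/virtual IP here
                # (e.g., via keepalived/VRRP); this sketch just swaps roles.
                active, standby = standby, active
                failures = 0
                print("Failed over to %s:%d" % active)
        time.sleep(CHECK_INTERVAL)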

DR

  • Companies should analyze their infrastructure and personnel assignment to identify mission-critical system components and personnel
  • A plan should be developed to identify and recover from a disaster; this plan should also include recovery time objective (RTO) and recovery point objective (RPO) to reflect the business model
  • A secondary data center (DC) [Office1] is recommended to mitigate risks of a major natural or human disaster
  • Mission-critical systems should be on standby or quickly deployable to meet or beat a company’s stated RTO
  • Backup data should be stored offsite and ideally at the secondary DR site to reduce recovery time
  • Once a plan is in place, mock fail-overs should be performed regularly to ensure the DR plan is fully executable and all parties understand their roles

GSLB

  • Complete, independent systems should be deployed into two or more DC locations
  • Each location is accessible via a unique IP address(es)
  • Data systems should be designed to operate independently by region, with synchronization on a schedule or on demand as needed
  • Each location hosts at least one LB instance that supports GSLB
  • Based on availability of each site, the location of a user, or data sovereignty regulations, users are directed to an available site via DNS resolution
  • Once a user has been directed to a site, standard load balancing takes over until the time to live (TTL) of the DNS resolution expires (see the sketch after this list)
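
Here’s a toy Python sketch of the DNS decision described in this list: pick a healthy site for the requesting user’s region and hand back its IP with a TTL. The site names, IPs, and TTL are hypothetical; real GSLB logic runs inside the DNS/load-balancing layer itself:

SITES = {
    "dal": {"ip": "203.0.113.10", "regions": ["us", "ca"], "healthy": True},
    "ams": {"ip": "198.51.100.20", "regions": ["eu"], "healthy": True},
}
DEFAULT_TTL = 60  # seconds; standard LB handles traffic until this expires

def resolve(client_region):
    # Prefer a healthy site that serves the client's region...
    for site in SITES.values():
        if site["healthy"] and client_region in site["regions"]:
            return site["ip"], DEFAULT_TTL
    # ...otherwise fall back to any healthy site.
    for site in SITES.values():
        if site["healthy"]:
            return site["ip"], DEFAULT_TTL
    raise RuntimeError("no healthy sites available")

print(resolve("eu"))  # ('198.51.100.20', 60)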

LB

  • Each server within a LB pool should reside in the same DC as the LB, or performance may degrade and health checks may fail
  • A minimum of two servers should be included in a LB pool
  • Load should be spread across servers based on the specification of each server; if all servers are equal in specs, the load should be shared equally (see the sketch after this list)
  • Each server in a LB pool will need a public IP address and active public interface to respond to Internet requests
  • When possible, it is recommended to leverage LB features such as SSL offload to minimize load on web servers
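
As a sketch of the spec-weighted distribution mentioned above, here’s a minimal weighted round-robin in Python. The server names and capacity weights are hypothetical; real load balancers offer smarter policies (least connections, health-aware weighting, etc.):

import itertools

POOL = [("web1", 2), ("web2", 1), ("web3", 1)]  # (server, relative capacity)

def weighted_round_robin(pool):
    # Expand each server by its weight, then cycle through the result.
    expanded = [name for name, weight in pool for _ in range(weight)]
    return itertools.cycle(expanded)

rr = weighted_round_robin(POOL)
for _ in range(8):
    print(next(rr))  # web1, web1, web2, web3, web1, web1, web2, web3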

I hope this clarifies the terms and uses of HA, DR, GSLB, and LB. Without background, tech jargon can be a bit ambiguous. In this case, some of the terms even share some of the same acronyms, so it’s easy to mix them up. If you haven't had a chance to kick the tires of the SoftLayer LB offerings or if you’re looking to build a DR solution on SoftLayer, just let us know. We’ll be happy to dive in and help you out.

- JD

 

June 27, 2016

Disaster Recovery in the Cloud: Are You Prepared?

While the importance of choosing the right disaster recovery solution and cloud provider cannot be overstated, having a disaster recovery runbook is equally important (if not more so). I have been involved in multiple conversations where the customer’s primary focus was the implementation of the best-suited disaster recovery technology, but the conversation regarding the DR runbook was either missing completely or lacked key pieces of information. Today, my focus will be to lay out a framework for what your DR runbook should look like.

“Eighty percent of businesses affected by a major incident either never re-open or close within 18 months.” (Source: Axa Report)

What is a disaster recovery runbook?

A disaster recovery runbook is a working document that outlines a recovery plan with all the necessary information required for execution of this plan. This document is unique to every organization and can include processes, technical details, personnel information, and other key pieces of information that may not be readily available during a disaster situation.

What should I include in this document?

As previously stated, a runbook is unique to every organization depending on the industry and internal processes, but there is standard information that applies to all organizations and should be included in every runbook. Below is a list of the most important information:

  • Version control and change history of the document.
  • Contacts with titles, phone numbers, email addresses, and job responsibilities.
  • Service provider and vendor list with point of contact, phone numbers, and email addresses.
  • Access Control List: application/system access and physical access to offices/data centers.
  • Updated organization chart.
  • Use case scenarios based on DR testing, i.e., what to do in the event of X, and the chain of events that must take place for recovery.
  • Alert and custom notifications/emails that need to be sent for a failure or DR event.
  • Escalation procedures.
  • Technical details and explanation of the disaster recovery solution (network layouts, traffic flows, systems and application inventory, backup configurations, etc.).
  • Application-based personnel roles and responsibilities.
  • Failover/failback procedures and how to revert to normal operations (a skeleton example follows this list).
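
One way to keep those sections current is to maintain the runbook as structured, version-controlled data. Below is a rough Python skeleton with hypothetical placeholder values; adapt the shape to your own processes:

RUNBOOK = {
    "version": "1.3",
    "change_history": [
        {"version": "1.3", "date": "2016-06-01", "author": "..."},
    ],
    "contacts": [
        {"name": "...", "title": "DR Lead", "phone": "...", "email": "..."},
    ],
    "vendors": [
        {"name": "...", "poc": "...", "phone": "...", "email": "..."},
    ],
    "access_control": {"applications": ["..."], "physical": ["..."]},
    "scenarios": {
        "primary_dc_power_loss": [
            "declare DR event",
            "send custom notifications",
            "fail over DNS to secondary DC",
            "verify RTO/RPO were met",
        ],
    },
    "escalation": ["on-call engineer", "DR lead", "CIO"],
    "failback": ["verify primary site", "resync data", "schedule cutover"],
}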

How to manage and execute the runbook

Processes, applications, systems, and employees can all change on a daily basis. It is essential to update this information in the DR runbook on a regular basis to ensure the accuracy of the document.

All relevant employees should receive DR training and should be well informed of their roles and responsibilities in a DR event. They should be asked to take ownership of certain tasks, which should be well documented in the runbook.

In short, we all hope to avoid a disaster. But when it happens, we must be prepared to tackle it. I hope the information above will be helpful in taking the first step towards preparing a DR runbook. Please feel free to contact me for additional information or guidance.

-Zeb

 

June 23, 2016

Meet the Integrated IBM Cloud Platform: SoftLayer and Bluemix

Did you know that you can complement your SoftLayer infrastructure with IBM Bluemix platform-as-a-service? (Read on—then put these ideas into practice with a special offer at the end.)

When you pair Bluemix with SoftLayer, you can buy, build, access, and manage the production of scalable environments and applications by using the infrastructure and application services together. 

Whether you need insight on the effectiveness of a multimedia campaign, need to process vast amounts of data in real-time, or want to deploy websites and web content for millions of users, you can create a better experience for your customers by combining the power of your SoftLayer infrastructure with Bluemix.

Bluemix solutions and services allow you to:

  • Optimize campaigns in real-time based on customer reaction using Watson Personality Insights and Insights for Twitter.
  • Run scalable analytics using Streaming Analytics to retrieve results in seconds.
  • Improve outcomes with Watson Alchemy API and Retrieve and Rank paired with high performance bare metal servers.
  • Automate hundreds of daily web deployments using SoftLayer and Bluemix APIs.
  • Securely store, analyze, and process big data using Cloudant database service with Apache Spark.

You can see the value of an integrated SoftLayer/Bluemix experience by looking at insights and cognitive, big data and analytics, and web applications.

Insights and Cognitive

Forty-four percent of organizations say customer experience will be the primary way they seek to differentiate from competitors.

The scenario: Marketing organizations and advertising agencies want to release a large, worldwide marketing campaign, complete with embedded ads. With the explosive growth of mobile, social, and video, those ads are often image- and video-intensive. Not only are these enterprises worried about how to run such a high-performing workload where customer data needs to stay in-country, but they have no idea how effective their campaign will be—and whether those receiving it are the users they’re trying to target—until it’s too late.

The solution: A media-rich campaign workload can run on high-performing bare metal servers in SoftLayer data centers. Cognitive services are added to understand in real time the impact of the campaign and whether it is reaching target customers, whose personal data is stored in proximity to the user.

  • SoftLayer bare metal servers run media-rich (video, image) campaign workloads.
  • Bluemix’s Insights for Twitter service is used to understand in real-time the impact of the campaign.
  • Watson’s Personality Insights allows you to see, based on 40 calculated attributes, if users viewing ads match the target customers.
  • Globally diverse block storage enables data storage across the world.

Personality portrait

Big Data and Analytics

The value of data decreases over time. On average, it takes two weeks to analyze social data.

The scenario: Customers need to harness vast amounts of data in real-time. The problem is many data streams come too fast to store in a database for later analysis. Further, the analysis needs to be done NOW. From social media, consumer video, and audio, to security cameras, businesses could win or lose by being the first to discover essential patterns from these real-time feeds and act upon them.

The solution:  Customers can use Streaming Analytics and get results in seconds, not hours. Alchemy API and Retrieve and Rank services can improve decisions and outcomes all from bare metal servers with scalable IBM Containers.

  • Streaming Analytics can run scalable analytics solutions and get results in seconds, not hours.
  • Patterns that are found can be stored with the associated stream content in object storage and transferred around the world using CDN to be co-located with their customers.
  • Watson’s Retrieve and Rank service can improve decisions and outcomes.
  • Run services from high-performing, low-latency bare metal servers that can scale as activity swells using IBM Containers.

Hadoop, data warehouse, NOSQL diagram

Web Application

It can take several weeks for a DBMS instance to be provisioned for a new development project, which limits innovation and agility.

The scenario: Customers deploying websites and web content for millions of users need fast infrastructure and services so they can focus on their users, not spend their time managing servers and infrastructure. This is especially true for commerce sites that need to be constantly available for orders. These also need a reliable database to securely store the data. The problem is these customers do not want to manage their database, and need an infrastructure provider that is worldwide, reliable, and screaming fast.

The solution: Customers can host web applications on VMs and bare metal with a broad range of needs, including sites that require deep data analysis. Apache Spark can be used to spin up in-memory computing to analyze Cloudant data and return results 100x faster to the user.

  • Automate hundreds of web deployments using SoftLayer APIs (see the sketch below).
  • Cloudant DB offloads DB management, reallocates budget from admins to application developers.
  • Apache Spark analyzes Cloudant data 100 times faster using in-memory computing cluster.
  • Bare metal servers provide a high-performing environment for the most stringent requirements.
  • Load balancers manage traffic, helping to ensure uptime.
  • Virtual servers with the Auto Scale service grow and shrink environment to consistently meet needs of application without unnecessary expenditures.
  • Object storage open APIs speed worldwide delivery via CDN.

Cloudant diagram
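
As a rough illustration of the automation bullet above, here’s a hedged sketch using the softlayer-python client (pip install softlayer). The hostnames, domain, sizing, data center, and OS code are all hypothetical examples; check the API documentation for valid values on your account:

import SoftLayer

client = SoftLayer.create_client_from_env()  # reads SL_USERNAME / SL_API_KEY
vs = SoftLayer.VSManager(client)

# Stamp out a small web tier; loop bounds and values are placeholders.
for i in range(1, 4):
    guest = vs.create_instance(
        hostname="web%02d" % i,   # hypothetical naming scheme
        domain="example.com",     # hypothetical domain
        cpus=2,
        memory=4096,              # RAM in MB
        datacenter="dal09",       # hypothetical data center
        os_code="UBUNTU_LATEST",
        hourly=True,
    )
    print("Provisioning virtual server id", guest["id"])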

Exciting Offer

Put these ideas into practice by trying Bluemix today. To get you started, we are offering a $200 credit toward Bluemix usage when you link your SoftLayer and Bluemix billing accounts. The credit must be used within 30 days of linking the accounts.

Follow these easy instructions to get started:  

  • Visit the SoftLayer customer portal and log into your account.
  • Open a ticket to request the ability to link your Bluemix account.
  • Once activated, the “Link a Bluemix Account” button will appear at the top of the SoftLayer customer portal page.
  • Click on the “Link a Bluemix Account” button. 
  • Follow the on-screen instructions to link your SoftLayer account to a Bluemix account.

This offer expires on December 30, 2016.

Learn More

Bluemix Intro Demo

Watson Personality Insights

Real Time Streaming Analysis

Hybrid Data Warehouse



 

-Thomas Recchia

June 20, 2016

VMware on SoftLayer Just Got Even Easier

SoftLayer customers have been bringing VMware workloads and VMware add-ons to the infrastructure as a service (IaaS) platform for years. With the roll-out of per-processor monthly licensing and the automation of vSphere and vCenter deployment, the provisioning process has never been easier. 

Now SoftLayer has taken the next step by allowing customers to order and manage VMware add-ons with the same per-processor monthly pricing model. To celebrate, the sales engineering team has updated KnowledgeLayer and added a new section focused on VMware 6, including step-by-step guides for getting started on the platform. VMware vSphere 6 Getting Started, for example, details how to get vSphere servers up and running. It gives detailed instructions on how to build an environment from scratch, which VLANs and IP addresses customers should use, and the recommended network structure.

Let’s review what else is new.

SoftLayer has added the vCenter Server Appliance to the catalog to allow customers to fully scale their environments up on their own. We’ve also added instructions on how you can deploy vCenter as an appliance. For smaller environments, customers can still deploy vCenter as a Windows add-on and get up and running in under an hour.

To make the vCenter appliance and other add-ons possible, SoftLayer has enhanced the customer portal to allow customers to order and manage all VMware licensing add-ons in a simple panel. Customers use this system to order and manage licenses for vCenter Server Appliance, Virtual SAN, NSX-V, Site Recovery Manager, and vRealize Operations/Automation/Log Insight. Combined with speedy SoftLayer bare metal server provisioning times, customers can stand up or extend their VMware footprint across the globe in no time.

VMware NSX on SoftLayer is nothing new, but the capabilities of the latest version and the month-to-month pricing make it an option worth considering. Between the edge gateways and distributed networking enhancements, customers can build security and standardization into the platform that follows their workloads from server to server and site to site. Customers can span a private layer 2 domain across completely different locations by using a VXLAN overlay across a layer 3 routed network. This is particularly useful for disaster recovery and for bursting on-premises workloads out to SoftLayer. Customers also leverage NSX to isolate workloads in a multi-tenant environment without the need for additional VLANs from SoftLayer. VMware 6 NSX Getting Started is your first stop to learn about micro-segmentation and best practices with NSX at SoftLayer.

VMware Virtual SAN is our latest addition to the platform and provides customers with a great option for hosting mission-critical workloads on single-tenant infrastructure with software-defined storage (SDS). Customers can leverage common x86 compute available on SoftLayer to build reliable, high performance, and scalable dedicated storage pools. It was designed for performance (caching and local disk access), affordability (mixing solid state and capacity SATA drives), and supportability without the need for a storage architect. It is tightly integrated with vSphere administration and brings features like snapshots, linked clones, vSphere Replication, and vSphere APIs for data protection. 

If you have questions about VMware on the SoftLayer cloud, get in touch with our sales representatives on live chat or phone. They’ll be happy to help and can also coordinate a consultation with the SoftLayer sales engineering team if you need one. You may find some of your initial questions have already been answered in our VMware FAQ.

I’m also delighted to share some video tutorials our sales engineering team created, entitled “Getting Started With VMware 6.0” (Parts 1, 2, 3, 4). This series will walk you through examples of deploying VMware and answer some of your initial questions.

With that said, why not start deploying your VMware solution—or expanding your current VMware workloads with feature-rich add-ons? Now is the best time for you to take advantage of our promotion to spin up your VMware solution at SoftLayer. Ask a SoftLayer sales representative on live chat to get more details.

-Rick Ji

June 16, 2016

Larger Virtual Servers Now Available

You asked. We listened. We’re excited to announce that our clients can now provision virtual servers with more cores and more RAM.

Starting today, you’re empowered to run high-compute and memory-intensive workloads on public and private clouds with the same quick deployment and flexibility you’ve come to enjoy from SoftLayer. After all, you shouldn’t have to choose between flexibility and power.

Oh, and did we mention it’s all on demand? Deploy these new, larger sizes rapidly and start innovating—right now.

Whether you require a real-time analytics platform for healthcare, finance, or retail, these larger virtual servers provide the capabilities you need to harness and maximize analytics-driven solutions.

Popular use cases for larger virtual servers include real-time big data analytics solutions requiring millisecond execution as needed by organizations processing massive amounts of data, like weather companies. Given the immense amount of meteorological inputs required for any location, at any time, at millisecond speed, larger virtual server sizes power weather forecast responses in real-time.

With SoftLayer virtual servers, you can segment your data across public, private, and management networks for better reliability and speed. You get unmetered bandwidth across our private and management networks at no additional charge, and unmetered inbound bandwidth on our public network. As real-time data-intensive workloads are developed, SoftLayer ensures that our best-in-class network infrastructure can retrieve and move data with speed.
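
Once the larger sizes are available on your account, provisioning one is the same API call as any other virtual server. Here’s a minimal sketch using the softlayer-python client; the hostname, domain, sizing, and data center values are hypothetical examples:

import SoftLayer

client = SoftLayer.create_client_from_env()
vs = SoftLayer.VSManager(client)

guest = vs.create_instance(
    hostname="analytics01",  # hypothetical
    domain="example.com",    # hypothetical
    cpus=32,                 # one of the larger core counts (hypothetical)
    memory=131072,           # 128 GB RAM in MB (hypothetical)
    datacenter="dal09",      # hypothetical; see the availability list below
    os_code="UBUNTU_LATEST",
    hourly=True,
)
print("Provisioning virtual server id", guest["id"])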

New Sizes

Drum roll, please! Our newest offerings include:

Public virtual servers

Private virtual servers

Public virtual servers will be customizable, but will have limitations on various core/RAM ratios. Private nodes will provide complete customization.

Cores, RAM, storage

With the introduction of larger virtual servers, SoftLayer will also reconfigure socket/core ratios. The number of cores per socket is reflected below for newly deployed virtual servers:

Core:Socket Ratios

For clients using third-party software on virtual servers, it is recommended that you work with your software vendor to ensure socket-based software is properly licensed.

Data Center Availability

Currently, larger public and private virtual servers will only be available in select data centers, with more coming online in the near future. The following locations will offer public and private virtual server combinations configured with more than 16 cores or more than 64 GB RAM:

Locations of larger public and private virtual servers

For more information on virtual servers and for pricing, read here.

We are always interested to see how you are flying in the cloud and how these larger virtual servers help drive value for your business. Please connect with us on Twitter: @milan3patel and @conradjjohnson.

-Milan Patel

June 3, 2016

Mount SoftLayer Object Storage in a Docker Container

The popularity of Docker containers has many organizations wanting to host containers in their cloud environments. They’re looking for ways to “marry” their existing cloud storage options with Docker containers, which offer application portability. SoftLayer offers persistent data (structured or unstructured) with its object, file, and block storage.

Of the three storage options, object storage is usually more popular in the cloud world as a pay-as-you-go option. It provides persistent storage for numerous workloads with image, video, and audio files, such as mobile and web applications. Combine persistence with the power of Docker containers, and the result is a highly portable and flexible application platform on the cloud. I’d like to showcase mounting SoftLayer object storage inside a Docker container using Cloudfuse. This example can, of course, be extended for further automation of the mount process as needed.

The following are steps for mounting object storage to a Docker container:

  1. Know your SoftLayer object storage credentials, which can be retrieved from your SoftLayer account:
username (Your SoftLayer Object Store Username)
api_key (Your SoftLayer API Key or password string)
authurl (Authorization URL of the data center where your object store is hosted)
  2. Install Docker on your host machine. Click here for installation instructions.
  3. Create a new folder named SLObjectStoreTest and make it your current directory.
  4. Copy the following into a file named Dockerfile and store it in the SLObjectStoreTest folder. You can also clone it from GitHub.
# Dockerfile : Mount SoftLayer Object Store inside a container
# Version 1.1
 
# Pull base images
FROM ubuntu
 
# Set working directory
WORKDIR /root
 
# Update the package index and upgrade installed packages
RUN apt-get update && \
apt-get -y upgrade
 
# Install pip and the SoftLayer object storage Python bindings
RUN apt-get install -y python-pip && \
pip install softlayer-object-storage
 
# Install cloudfuse
RUN apt-get install -y build-essential libcurl4-openssl-dev libxml2-dev libssl-dev libfuse-dev && \
apt-get install -y curl && \
curl -L https://github.com/redbo/cloudfuse/tarball/master > cloudfuse.tar && \
tar -xzvf cloudfuse.tar && \
apt-get install -y libjson0 libjson0-dev && \
cd redb* && \
./configure && \
make && \
make install
ENTRYPOINT ["/bin/bash"]
 
# Build the Docker image from the Dockerfile (run this in the SLObjectStoreTest folder)
$ docker build .

You should see the Docker image being built. It will take a couple of minutes.

  5. Check that the image exists once it’s built by typing $ docker images.
  6. Use the following command to spin up a Docker container from this image:

docker run --cap-add SYS_ADMIN --privileged --device /dev/fuse:/dev/fuse:mrw -i -t <image_id>

You should see the bash prompt of the Docker container.

  7. Create a new folder where the SoftLayer object storage should be mounted, e.g.,

mkdir /storage

  8. Create a new file in the /root directory named .cloudfuse.
  9. Enter your SoftLayer object storage credentials (from Step 1) in the .cloudfuse file like below:
username (Your SoftLayer Object Store Username)
api_key (Your SoftLayer API Key or password string)
authurl (Authorization URL of the data center where your object store is hosted)
  10. Mount the SoftLayer object storage at /storage by running

cloudfuse /storage

You should see your SoftLayer object store mounted at /storage in your Docker container!

You can now configure this image to run your application, which can leverage this container—or use the container as a Docker volume container, composed with other containers running your application.
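
Since the Dockerfile above already pip-installs softlayer-object-storage, the same credentials from Step 1 also work through the Python bindings as an alternative to the Cloudfuse mount. A hedged sketch, assuming the client interface from the softlayer-object-storage-python project (the container and object names are hypothetical):

import object_storage

sl_storage = object_storage.get_client(
    "YOUR_USERNAME",      # SoftLayer object store username from Step 1
    "YOUR_API_KEY",       # API key or password string from Step 1
    datacenter="dal05",   # hypothetical; match your object store's data center
)

container = sl_storage["mycontainer"]   # hypothetical container name
container.create()
obj = container["hello.txt"]            # hypothetical object name
obj.create()
obj.send("Hello from a Docker container!")
print(obj.read())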

In case you want to experiment with an already built Docker image, you can pull it from the softlayerobjectstore_mount repository.

-Sravan K Yallapragada

June 1, 2016

For a Limited Time Only: Free POWER8 Servers

So maybe you’ve heard that POWER8 servers are now available from SoftLayer. But did you know you can try them for free?

Yep. That’s right. For. Free.

Even better: We’re excited to extend this offer to our new and existing customers. For a limited time only, our customers can take up to $2,238 off their entire order.

That’s a nice round number. (Not!)

I bet you’re wondering how we came up with that number. Well, $2,238 gets you the biggest, baddest POWER8-est machine we offer: POWER8 C812L-SSD, loaded with 10 cores, 3.49GHz, 512GB RAM, and 2x960GB SSDs. Of course, if you don’t need that much POWER (pun intended), we offer three other configs that might fit your lifestyle a little bit better. Check them out here.

 


Oh, and the not-so-fine print (as if I have to say it, but legal told me I had to, so…): This offer is good only on POWER8 servers. (Duh!) The offer expires July 31, 2016. You’re limited to one promo code use per customer only. Customers take up to $2,238 off the first order in the first billing cycle of your POWER8 server (which means order at the beginning of the month to take full advantage of the offer; if you wait till the 20th of the month, you only get it for 10 days—11 depending on whether the month has 30 or 31 days, but I digress). POWER8 is currently only rocking out in DAL09. This offer cannot be combined with any other offers, and SLIC accounts are not eligible.

For more information on this offer, please check out the FAQ or contact a sales representative. POWER up!

May 27, 2016

Data Security and Encryption in the Cloud

In Wikipedia’s words, encryption is the process of encoding messages or information in such a way that only authorized parties can read it. On a daily basis, I meet customers from various verticals. Whether it is health care, finance, government, technology, or any other public or privately held entity, they all have specific data security requirements. More importantly, the thought of moving to a public cloud brings its own set of challenges around data security. In fact, data security is the biggest hurdle when making the move from a traditional on-premises data center to a public cloud.

One of the ways to protect your data is by encryption. There are a few ways to encrypt data, and they all have their pros and cons. By the end of this post, you will hopefully have a better understanding of the options available to you and how to choose one that meets your data security requirements.

Data “At Rest” Encryption

At rest encryption refers to the encryption of data that is not moving. This data is usually stored on hardware such as local disk, SAN, NAS, or other portable storage devices. Regardless of how the data gets there, as long as it remains on that device and is not transferred or transmitted over a network, it is considered at rest data.

There are different methodologies to encrypt at rest data. Let’s look at a few of the most common ones:

Disk Encryption: This is a method where all data on a particular physical disk is encrypted. This can be done by using SED (self-encrypting disk) or using third-party solutions from vendors like Vormetric, SafeNet, PrimeFactors, and more. In a public cloud environment, your data will most likely be hosted on a multitenant SAN infrastructure, so key management and the public cloud vendor’s ability to offer dedicated, local, or SAN spindles becomes critical. Moreover, keep in mind that using this encryption methodology does not protect data when it leaves the disk. This method may also be more expensive and may add management overhead. On the other hand, disk encryption solutions are mostly operating system agnostic, allowing for more flexibility.

File Level Encryption: File level encryption is usually implemented by running a third-party application within the operating system to encrypt files and folders. In many cases, these solutions create a virtual or a logical disk where all files and folders residing in it are encrypted. Tools like VeraCrypt (TrueCrypt’s successor), BitLocker, and 7-Zip are a few examples of file encryption software. These are very easy to implement and support all major operating systems.  
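
As a minimal illustration of file-level encryption at rest, here’s a sketch using the Python cryptography package’s Fernet recipe (AES-based, authenticated encryption). The data and file name are hypothetical, and key management is the hard part in practice:

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, protect this key (e.g., in a KMS)
f = Fernet(key)

ciphertext = f.encrypt(b"quarterly financials")  # hypothetical sensitive data
with open("secret.bin", "wb") as fh:             # the at-rest, encrypted copy
    fh.write(ciphertext)

# Later, with the same key:
with open("secret.bin", "rb") as fh:
    assert f.decrypt(fh.read()) == b"quarterly financials"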

Data “In Flight” Encryption

Encrypting data in flight involves encrypting the data stream at one point and decrypting it at another point. For example, if you replicate data across two data centers and want to ensure confidentiality of this exchange, you would use data in flight encryption to encrypt the data stream as it leaves the primary data center, then decrypt it at the other end of the cable at the secondary data center. Since the data exchange is very brief, the keys used to encrypt the frames or packets are no longer needed after the data is decrypted at the other end, so they are discarded—there is no need to manage these keys. The most common protocols used for in-flight data encryption are IPsec VPN and TLS/SSL.
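
For a concrete taste of in-flight encryption, here’s a short Python sketch that wraps a TCP socket in TLS using the standard library. The host is a placeholder; as described above, the session keys negotiated during the handshake are simply discarded when the connection closes:

import socket
import ssl

context = ssl.create_default_context()  # verifies certificates against system CAs
with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())  # e.g., 'TLSv1.2'
        tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls.recv(256))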

And there you have it. Hopefully by now you have a good understanding of the most common encryption options available to you. Just keep in mind that more often than not, at rest and in flight encryption are implemented in conjunction and complement each other. When choosing the right methodology, it is critical to understand the use case, application, and compliance requirements. You will also want to make sure that the software or technology you choose adheres to strong, widely vetted encryption standards and algorithms, such as AES, RSA, 3DES, Blowfish, etc.

-Zeb Ahmed

May 24, 2016

Streamlining the VMware Licenses Ordering Process

IBM and VMware’s agreement (announced in February) enables enterprise customers to extend their existing on-premises workloads to the cloud—specifically, the IBM Cloud. Customers can now leverage VMware technologies with IBM’s worldwide cloud data centers, giving them the power to scale globally while avoiding capital expense and reducing security risks.

So what does this mean to customers’ VMware administrators? They can quickly realize cost-effective hybrid cloud characteristics by deploying into SoftLayer’s enterprise-grade global cloud platform (VMware@SoftLayer). One of these characteristics is that vSphere workloads and catalogs can be provisioned onto VMware vSphere environments within SoftLayer's data centers without modification to VMware VMs or guests. The use of a common vSphere hypervisor and management/orchestration platform makes these deployments possible.

vSphere implementations on SoftLayer also enable utilization of other components. Table 1 contains a list of VMware products that are now available for ordering through the SoftLayer customer portal. Note that prices are subject to change. Visit VMware Solutions for the most current pricing.

Product Name | Customer List Price
VMware vCenter Server Standard | Included with vSphere
VMware vSphere Enterprise Plus | Starting at $85 per processor per month
VMware vRealize Suite [Includes VMware vRealize (Standard Edition), vRealize Log Insight, and vRealize Automation (Standard Edition)] | Starting at $48 per processor per month
VMware vRealize Operations Enterprise Edition | Starting at $68 per processor per month
VMware vRealize Operations Advanced Edition | Starting at $33 per processor per month
VMware vRealize Automation Enterprise | Starting at $150 per processor per month
VMware vRealize Automation Advanced | Starting at $75 per processor per month
VMware NSX-V | Starting at $118 per processor per month
VMware Integrated OpenStack (VIO) | Starting at $11 per processor per month
Virtual SAN Standard Tier 1 (0-20 TB) | Contact SoftLayer Sales for pricing
Virtual SAN Standard Tier 2 (21-64 TB) | Contact SoftLayer Sales for pricing
Virtual SAN Standard Tier 3 (65-124 TB) | Contact SoftLayer Sales for pricing
VMware Site Recovery Manager (SRM) | Starting at $257 per processor per month

Table 1. VMware products available in the SoftLayer Customer Portal

Use the following steps to order licenses for the VMware products listed in Table 1:

  1. Log in to the SoftLayer customer portal.
  2. Click Devices > Managed > VMware Licenses.

Steps to VMware Licenses page

Figure 1. Steps to VMware Licenses page

  3. Click on Order VMware Licenses in the top right-hand corner of the VMware Licenses page.

Order VMware Licenses

Figure 2. Order VMware Licenses

  4. Click Add License and use the drop-down list to select the VMware products and number of CPUs for the licenses you want to order (Figure 3).

Note: VMware vSphere Enterprise Plus (ESXi 6.0) cannot be ordered through this process. You must still order it as a requested OS when you order your bare metal server.



Select the VMware product and number of CPUs

Figure 3. Select the VMware product and number of CPUs

  5. View the price of the VMware product you selected on the far right of the screen.

View your selection before continuing the ordering process

Figure 4. View your selection before continuing the ordering process

  6. Click Continue to order the licenses, or click Add License to add more licenses.

Once you click Continue, you are taken back to the VMware Licenses page, which displays your VMware product(s) and license key(s).

List of VMware products and license keys

Figure 5. List of VMware products and license keys

  7. Download the Install Files from the link on this page. You will need an SSL connection to the SoftLayer private network to access the download page.
  8. Download the correct VMware product(s) and manually install them into your vSphere environment.

 

- Kerry Staples

May 19, 2016

Bringing the Power of GPUs to the Cloud

The GPU was invented by NVIDIA back in 1999 as a way to quickly render computer graphics by offloading the computational burden from the CPU. A great deal has happened since then—GPUs are now enablers for leading-edge deep learning, scientific research, design, and “fast data” querying startups that have ambitions of changing the world.

That’s because GPUs are very efficient at manipulating computer graphics, image processing, and other computationally intensive high performance computing (HPC) applications. Their highly parallel structure makes them more effective than general purpose CPUs for algorithms where the processing of large blocks of data is done in parallel. GPUs, capable of handling multiple calculations at the same time, also have a major performance advantage. This is the reason SoftLayer (now part of IBM Cloud) has brought these capabilities to a broader audience.

We support the NVIDIA Tesla Accelerated Computing Platform, which makes HPC capabilities more accessible to, and affordable for, everyone. Companies like Artomatix and MapD are using our NVIDIA GPU offerings to achieve unprecedented speed and performance, traditionally only achievable by building or renting an HPC lab.

By provisioning SoftLayer bare metal servers with cutting-edge NVIDIA GPU accelerators, any business can harness the processing power needed for HPC. This enables businesses to manage the most complex, compute-intensive workloads—from deep learning and big data analytics to video effects—using affordable, on-demand computing infrastructure.

Take a look at some of the groundbreaking results companies like MapD are experiencing using GPU-enabled technology running on IBM Cloud. They’re making big data exploration visually interactive and insightful by using NVIDIA Tesla K80 GPU accelerators running on SoftLayer bare metal servers.

SoftLayer has also added the NVIDIA Tesla M60 GPU to our arsenal. This GPU technology enables clients to deploy fewer, more powerful servers on our cloud while churning through more jobs. Specifically, simulation runs are cut down from weeks or days to hours compared to a CPU-only server—think of the performance running tools and applications like Amber for molecular dynamics, TeraChem for quantum chemistry, and Echelon for oil and gas.

The Tesla M60 also speeds up virtualized desktop applications. There is widespread support for running virtualized applications, from AutoCAD to Siemens NX, on a GPU server. This allows clients to centralize their infrastructure while providing access to the application, regardless of location. There are endless use cases with GPUs.

With this arsenal, we are one step closer to offering real supercomputing performance on a pay-as-you-go basis, which makes this new approach to tackling big data problems accessible to customers of all sizes. We are at an interesting inflection point in our industry, where GPU technology is opening the door for the next wave of breakthroughs across multiple industries.

-Jerry Gutierrez
