Tips And Tricks Posts

June 27, 2016

Disaster Recovery in the Cloud: Are You Prepared?

While the importance of choosing the right disaster recovery solution and cloud provider cannot be overstated, having a disaster recovery runbook is equally important (if not more so). I have been involved in multiple conversations where the customer’s primary focus was implementing the best-suited disaster recovery technology, but the conversation about the DR runbook was either missing completely or lacked key pieces of information. Today, my focus will be to lay out a framework for what your DR runbook should look like.

“Eighty percent of businesses affected by a major incident either never re-open or close within 18 months.” (Source: Axa Report)

What is a disaster recovery runbook?

A disaster recovery runbook is a working document that outlines a recovery plan with all the necessary information required for execution of this plan. This document is unique to every organization and can include processes, technical details, personnel information, and other key pieces of information that may not be readily available during a disaster situation.

What should I include in this document?

As previously stated, a runbook is unique to every organization depending on its industry and internal processes, but there is standard information that applies to all organizations and should be included in every runbook. Below is a list of the most important information, followed by a minimal skeleton you can adapt:

  • Version control and change history of the document.
  • Contacts with titles, phone numbers, email addresses, and job responsibilities.
  • Service provider and vendor list with point of contact, phone numbers, and email addresses.
  • Access Control List: application/system access and physical access to offices/data centers.
  • Updated organization chart.
  • Use case scenarios based on DR testing, i.e., what to do in the event of X, and the chain of events that must take place for recovery.
  • Alert and custom notifications/emails that need to be sent for a failure or DR event.
  • Escalation procedures.
  • Technical details and explanation of the disaster recovery solution (network layouts, traffic flows, systems and application inventory, backup configurations, etc.).
  • Application-based personnel roles and responsibilities.
  • Failover/failback procedures and how to revert to normal operations.
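
To make this concrete, here is a minimal skeleton, offered purely as an illustration; rename and reorder the sections to match your own processes:

DR Runbook: <Company Name>
  1. Document control: version, change history, owners, next review date
  2. Contacts: names, titles, phone numbers, email addresses, job responsibilities
  3. Service providers and vendors: points of contact and escalation paths
  4. Access control: application/system access and physical access procedures
  5. Organization chart (current)
  6. Scenarios: "in the event of X" recovery sequences validated by DR testing
  7. Notifications: alerts and custom emails to send during a failure or DR event
  8. Escalation procedures
  9. Technical details: network layouts, traffic flows, inventory, backup configurations
  10. Roles and responsibilities by application
  11. Failover/failback procedures and how to revert to normal operations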

How to manage and execute the runbook

Processes, applications, systems, and employees can all change on a daily basis. It is essential to update this information in the DR runbook on a regular basis to ensure the accuracy of the document.

All relevant employees should receive DR training and should be well informed of their roles and responsibilities in a DR event. They should be asked to take ownership of certain tasks, which should be well documented in the runbook.

In short, we all hope to avoid a disaster. But when it happens, we must be prepared to tackle it. I hope the information above will be helpful in taking the first step towards preparing a DR runbook. Please feel free to contact me for additional information or guidance.

-Zeb

 

June 23, 2016

Meet the Integrated IBM Cloud Platform: SoftLayer and Bluemix

Did you know that you can complement your SoftLayer infrastructure with IBM Bluemix platform-as-a-service? (Read on—then put these ideas into practice with a special offer at the end.)

When you pair Bluemix with SoftLayer, you can buy, build, access, and manage scalable production environments and applications by using the infrastructure and application services together.

Whether you need insight on the effectiveness of a multimedia campaign, need to process vast amounts of data in real time, or want to deploy websites and web content for millions of users, you can create a better experience for your customers by combining the power of your SoftLayer infrastructure with Bluemix.

Bluemix solutions and services allow you to:

  • Optimize campaigns in real time based on customer reaction using Watson Personality Insights and Insights for Twitter.
  • Run scalable analytics using Streaming Analytics to retrieve results in seconds.
  • Improve outcomes with Watson Alchemy API and Retrieve and Rank paired with high-performance bare metal servers.
  • Automate hundreds of daily web deployments using SoftLayer and Bluemix APIs.
  • Securely store, analyze, and process big data using Cloudant database service with Apache Spark.

You can see the value of an integrated SoftLayer/Bluemix experience by looking at insights and cognitive, big data and analytics, and web applications.

Insights and Cognitive

Forty-four percent of organizations say customer experience will be the primary way they seek to differentiate from competitors.

The scenario: Marketing organizations and advertising agencies want to release a large, worldwide marketing campaign, complete with embedded ads. With the explosive growth of mobile, social, and video, those ads are often image- and video-intensive. Not only are these enterprises worried about how to run such a high-performing workload where customer data needs to stay in-country, but they have no idea how effective their campaign will be—and whether those receiving it are the users they’re trying to target—until it’s too late.

The solution: A media-rich campaign workload can run on high-performing bare metal servers in SoftLayer data centers. Cognitive services are added to understand, in real time, the impact of the campaign and its target customers, whose personal data is stored in proximity to the user.

  • SoftLayer bare metal servers run media-rich (video, image) campaign workloads.
  • Bluemix’s Insights for Twitter service is used to understand in real time the impact of the campaign.
  • Watson’s Personality Insights allows you to see, based on 40 calculated attributes, whether users viewing ads match the target customers (see the sketch after this list).
  • Globally diverse block storage enables data storage across the world.
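
To give a feel for the cognitive piece, here is a minimal Python sketch that sends viewer text to the Personality Insights REST API. The endpoint, version path, and response shape are assumptions based on the Bluemix service documentation of the time; the credentials come from the service’s dashboard:

# Minimal sketch: score viewer text with Personality Insights over REST.
# Endpoint and version path are assumptions from the 2016 Bluemix docs;
# credentials come from the service dashboard and are placeholders here.
import json
import requests

PI_URL = "https://gateway.watsonplatform.net/personality-insights/api/v2/profile"
USERNAME = "your-service-username"   # placeholder
PASSWORD = "your-service-password"   # placeholder

def personality_profile(text):
    """Send plain text and return the calculated personality traits."""
    resp = requests.post(
        PI_URL,
        auth=(USERNAME, PASSWORD),
        headers={"Content-Type": "text/plain", "Accept": "application/json"},
        data=text.encode("utf-8"),
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    with open("viewer_comments.txt") as f:
        profile = personality_profile(f.read())
    print(json.dumps(profile, indent=2))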

[Image: Personality portrait]

Big Data and Analytics

The value of data decreases over time. On average, it takes two weeks to analyze social data.

The scenario: Customers need to harness vast amounts of data in real-time. The problem is many data streams come too fast to store in a database for later analysis. Further, the analysis needs to be done NOW. From social media, consumer video, and audio, to security cameras, businesses could win or lose by being the first to discover essential patterns from these real-time feeds and act upon them.

The solution: Customers can use Streaming Analytics and get results in seconds, not hours. Alchemy API and Retrieve and Rank services can improve decisions and outcomes, all from bare metal servers with scalable IBM Containers.

  • Streaming Analytics can run scalable analytics solutions and get results in seconds, not hours.
  • Patterns that are found can be stored with the associated stream content in object storage and transferred around the world using CDN to be co-located with their customers.
  • Watson’s Retrieve and Rank service can improve decisions and outcomes.
  • Run services from high-performing, low-latency bare metal servers that can scale as activity swells using IBM Containers.

[Image: Hadoop, data warehouse, and NoSQL diagram]

Web Application

It can take several weeks for a DBMS instance to be provisioned for a new development project, which limits innovation and agility.

The scenario: Customers deploying websites and web content for millions of users need fast infrastructure and services so they can focus on their users, not spend their time managing servers and infrastructure. This is especially true for commerce sites that need to be constantly available for orders. They also need a reliable database to securely store their data. The problem is these customers do not want to manage their database, and they need an infrastructure provider that is worldwide, reliable, and screaming fast.

The solution: Customers with a broad range of needs can host web applications on VMs and bare metal, including sites that require deep data analysis. Apache Spark can be used to spin up in-memory computing to analyze Cloudant data and return results 100x faster to the user.

  • Automate hundreds of web deployments using SoftLayer APIs (see the sketch after this list).
  • Cloudant DB offloads database management, reallocating budget from admins to application developers.
  • Apache Spark analyzes Cloudant data 100 times faster using an in-memory computing cluster.
  • Bare metal servers provide a high-performing environment for the most stringent requirements.
  • Load balancers manage traffic, helping to ensure uptime.
  • Virtual servers with the Auto Scale service grow and shrink the environment to consistently meet the needs of the application without unnecessary expenditures.
  • Object storage open APIs speed worldwide delivery via CDN.
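
Here is what that first bullet can look like in practice: a minimal sketch using the SoftLayer Python client to provision a virtual server from a script. The hostname, domain, datacenter, and sizing values are placeholders; looping over this call is how deployments scale into the hundreds.

# Minimal sketch: provision one virtual server with the SoftLayer
# Python client (pip install SoftLayer). All values are placeholders.
import SoftLayer

client = SoftLayer.create_client_from_env(
    username="your-portal-username",
    api_key="your-api-key",
)
vs_manager = SoftLayer.VSManager(client)

# One call per deployment; wrap it in a loop to automate many.
instance = vs_manager.create_instance(
    hostname="web01",
    domain="example.com",
    datacenter="dal05",
    cpus=2,
    memory=4096,              # in MB
    os_code="UBUNTU_LATEST",
    hourly=True,
)
print("Provisioning started for instance", instance["id"])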

[Image: Cloudant diagram]

Exciting Offer

Put these ideas into practice by trying Bluemix today. When you link your SoftLayer account with a Bluemix account, we’ll give you a $200 credit toward Bluemix usage. The credit must be used within 30 days of linking the accounts.

Follow these easy instructions to get started:  

  • Visit the SoftLayer customer portal and log into your account.
  • Open a ticket to request the ability to link your Bluemix account.
  • Once activated, the “Link a Bluemix Account” button will appear at the top of the SoftLayer customer portal page.
  • Click on the “Link a Bluemix Account” button. 
  • Follow the on-screen instructions to link your SoftLayer account to a Bluemix account.

This offer expires on December 30, 2016.

Learn More

Bluemix Intro Demo

Watson Personality Insights

Real Time Streaming Analysis

Hybrid Data Warehouse



 

-Thomas Recchia

June 20, 2016

VMware on SoftLayer Just Got Even Easier

SoftLayer customers have been bringing VMware workloads and VMware add-ons to the infrastructure as a service (IaaS) platform for years. With the roll-out of per-processor monthly licensing and the automation of vSphere and vCenter deployment, the provisioning process has never been easier. 

Now SoftLayer has taken the next step by allowing customers to order and manage VMware add-ons with the same per-processor monthly pricing model. To celebrate, the sales engineering team has updated KnowledgeLayer and added a new section focused on VMware 6, including step-by-step guides for getting started on the platform. VMware vSphere 6 Getting Started, for example, details how to get vSphere servers up and running. It gives detailed instructions on how to create an environment from scratch, which VLANs and IP addresses customers should use, and the recommended network structure.

Let’s review what else is new.

SoftLayer has added the vCenter Server Appliance to the catalog to allow customers to fully scale their environments up on their own. We’ve also added instructions on how you can deploy vCenter as an appliance. For smaller environments, customers can still deploy vCenter as a Windows add-on and get up and running in under an hour.

To make the vCenter appliance and other add-ons possible, SoftLayer has enhanced the customer portal to allow customers to order and manage all VMware licensing add-ons in a simple panel. Customers use this system to order and manage licenses for vCenter Server Appliance, Virtual SAN, NSX-V, Site Recovery Manager, and vRealize Operations/Automation/Log Insight. Combined with speedy SoftLayer bare metal server provisioning times, customers can stand up or extend their VMware footprint across the globe in no time.

VMware NSX on SoftLayer is nothing new, but the capabilities of the latest version and the month-to-month pricing make it an option worth considering. Between the edge gateways and distributed networking enhancements, customers can build security and standardization into the platform that follows their workloads from server to server and site to site. Customers can span a private layer 2 domain across completely different locations by using a VXLAN overlay across a layer 3 routed network. This is particularly useful for disaster recovery and for bursting on-premises workloads out to SoftLayer. Customers also leverage NSX to isolate workloads in a multi-tenant environment without the need for additional VLANs from SoftLayer. VMware 6 NSX Getting Started is your first stop to learn about micro-segmentation and best practices with NSX at SoftLayer.

VMware Virtual SAN is our latest addition to the platform and provides customers with a great option for hosting mission-critical workloads on single-tenant infrastructure with software-defined storage (SDS). Customers can leverage common x86 compute available on SoftLayer to build reliable, high performance, and scalable dedicated storage pools. It was designed for performance (caching and local disk access), affordability (mixing solid state and capacity SATA drives), and supportability without the need for a storage architect. It is tightly integrated with vSphere administration and brings features like snapshots, linked clones, vSphere Replication, and vSphere APIs for data protection. 

If you have questions about VMware on the SoftLayer cloud, get in touch with our sales representatives on live chat or phone. They’ll be happy to help and can also coordinate a consultation with the SoftLayer sales engineering team if you need one. You may find some of your initial questions have already been answered in our VMware FAQ.

I’m also delighted to share some video tutorials our sales engineering team created, entitled “Getting Started With VMware 6.0” (Parts 1, 2, 3, 4). This series will walk you through examples of deploying VMware and answer some of your initial questions.

With that said, why not start deploying your VMware solution—or expanding your current VMware workloads with feature-rich add-ons? Now is the best time for you to take advantage of our promotion to spin up your VMware solution at SoftLayer. Ask a SoftLayer sales representative on live chat to get more details.

-Rick Ji

June 3, 2016

Mount SoftLayer Object Storage in a Docker Container

The popularity of Docker containers has many organizations wanting to host containers in their cloud environments. They’re looking for ways to “marry” their existing cloud storage options with Docker containers, which offer application portability. SoftLayer offers persistent data storage (structured or unstructured) with its object, file, and block storage.

Of the three storage options, object storage is usually the most popular in the cloud world as a pay-as-you-go option. It provides persistent storage for numerous image-, video-, and audio-heavy workloads, such as mobile and web applications. Combine persistence with the power of Docker containers, and the result is a highly portable and flexible application platform on the cloud. I’d like to showcase mounting SoftLayer object storage inside a Docker container using Cloudfuse. This example can, of course, be extended for further automation of the mount process as needed.

The following are steps for mounting object storage to a Docker container:

  1. Know your SoftLayer object storage credentials, which can be retrieved from your SoftLayer account:
username (Your SoftLayer Object Store Username)
api_key (Your SoftLayer API Key or password string)
authurl (Authorization URL of the data center where your object store is hosted)
  2. Install Docker on your host machine. Click here for installation instructions.

     
  3. Create a new folder named SLObjectStoreTest and make it your current directory.

     
  4. Copy the following into a file named Dockerfile and store it in the SLObjectStoreTest folder. You can also clone it from GitHub.
# Dockerfile : Mount SoftLayer Object Store inside a container
# Version 1.1
 
# Pull base image
FROM ubuntu
 
# Set working directory
WORKDIR /root
 
# Update the package lists and upgrade installed packages
RUN apt-get update && \
apt-get -y upgrade
 
# Install pip and the SoftLayer object storage Python bindings
RUN apt-get install -y python-pip && \
pip install softlayer-object-storage
 
# Install cloudfuse and its build dependencies
RUN apt-get install -y build-essential libcurl4-openssl-dev libxml2-dev libssl-dev libfuse-dev && \
apt-get install -y curl && \
curl -L https://github.com/redbo/cloudfuse/tarball/master > cloudfuse.tar && \
tar -xzvf cloudfuse.tar && \
apt-get install -y libjson0 libjson0-dev && \
cd redb* && \
./configure && \
make && \
make install
 
# Drop into a shell when the container starts
ENTRYPOINT ["/bin/bash"]
 
# Build the Docker image from the Dockerfile (run this from the
# SLObjectStoreTest folder; note the trailing dot)
$ docker build .

You should see the Docker image being built. It will take a couple of minutes.

  5. Check that the image exists once it’s built by typing docker images.

     
  6. Use the following command to spin up a Docker container from this image:

docker run --cap-add SYS_ADMIN --privileged --device /dev/fuse:/dev/fuse:mrw -i -t <image_id>

You should see the bash prompt of the Docker container.

  7. Create a new folder where the SoftLayer object storage should be mounted, e.g.,

mkdir /storage

  8. Create a new file in the /root directory named .cloudfuse.
  9. Enter your SoftLayer object storage credentials (from Step 1) in the .cloudfuse file like below:
username (Your SoftLayer Object Store Username)
api_key (Your SoftLayer API Key or password string)
authurl (Authorization URL of the data center where your object store is hosted)
  10. Mount the SoftLayer object storage at /storage by running

cloudfuse /storage

You should see your SoftLayer object store mounted at /storage in your Docker container!

You can now configure this image to run your application, which can leverage this container—or use the container as a Docker volume container, composed with other containers running your application.
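
If you would rather reach object storage programmatically from inside the container, note that the Dockerfile above already installs the softlayer-object-storage Python bindings. Here is a minimal sketch, with placeholder credentials and datacenter, based on that library’s documented usage:

# Minimal sketch using the softlayer-object-storage Python bindings
# installed by the Dockerfile. Credentials and datacenter are placeholders.
import object_storage

sl_storage = object_storage.get_client(
    "SLOS12345-2:username",   # your object store username
    "your-api-key",
    datacenter="dal05",
)

container = sl_storage["test_container"]
container.create()                        # create the container if needed

obj = container["hello.txt"]
obj.create()
obj.send("Hello from a Docker container!")
print(obj.read())                         # round-trip check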

In case you want to experiment with an already built Docker image, you can pull it from the softlayerobjectstore_mount repository.

-Sravan K Yallapragada

May 27, 2016

Data Security and Encryption in the Cloud

In Wikipedia’s words, encryption is the process of encoding messages or information in such a way that only authorized parties can read it. On a daily basis, I meet customers from various verticals. Whether it is health care, finance, government, technology, or any other public or privately held entity, they all have specific data security requirements. More importantly, the thought of moving to a public cloud brings its own set of challenges around data security. In fact, data security is the biggest hurdle when making the move from a traditional on-premises data center to a public cloud.

One of the ways to protect your data is by encryption. There are a few ways to encrypt data, and they all have their pros and cons. By the end of this post, you will hopefully have a better understanding of the options available to you and how to choose one that meets your data security requirements.

Data “At Rest” Encryption

At rest encryption refers to the encryption of data that is not moving. This data is usually stored on hardware such as local disk, SAN, NAS, or other portable storage devices. Regardless of how the data gets there, as long as it remains on that device and is not transferred or transmitted over a network, it is considered at rest data.

There are different methodologies to encrypt at rest data. Let’s look at a few of the most common ones:

Disk Encryption: This is a method where all data on a particular physical disk is encrypted. This can be done by using SEDs (self-encrypting drives) or third-party solutions from vendors like Vormetric, SafeNet, PrimeFactors, and more. In a public cloud environment, your data will most likely be hosted on a multitenant SAN infrastructure, so key management and the public cloud vendor’s ability to offer dedicated, local, or SAN spindles become critical. Moreover, keep in mind that using this encryption methodology does not protect data when it leaves the disk. This method may also be more expensive and may add management overhead. On the other hand, disk encryption solutions are mostly operating system agnostic, allowing for more flexibility.

File Level Encryption: File level encryption is usually implemented by running a third-party application within the operating system to encrypt files and folders. In many cases, these solutions create a virtual or logical disk where all files and folders residing in it are encrypted. Tools like VeraCrypt (TrueCrypt’s successor), BitLocker, and 7-Zip are a few examples of file encryption software. These are very easy to implement and support all major operating systems.
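
To make the concept concrete, here is a minimal at-rest encryption sketch in Python using the cryptography library’s Fernet recipe. It illustrates the principle only; the dedicated tools above add key management, which is the genuinely hard part:

# Minimal at-rest encryption sketch (pip install cryptography).
# Fernet provides AES-based authenticated encryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # protect this key; it is worth more than the data
cipher = Fernet(key)

# Encrypt a file before it lands on disk or portable media.
with open("customer_records.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())
with open("customer_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# Anyone holding the key (and only them) can recover the plaintext.
with open("customer_records.csv.enc", "rb") as f:
    plaintext = cipher.decrypt(f.read())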

Data “In Flight” Encryption

Encrypting data in flight involves encrypting the data stream at one point and decrypting it at another point. For example, if you replicate data across two data centers and want to ensure confidentiality of this exchange, you would use data in flight encryption to encrypt the data stream as it leaves the primary data center, then decrypt it at the other end of the cable at the secondary data center. Since the data exchange is very brief, the keys used to encrypt the frames or packets are no longer needed after the data is decrypted at the other end, so they are discarded; there is no need to manage these keys. The most common protocols used for in-flight data encryption are IPsec VPN and TLS/SSL.
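
In practice, most applications get in-flight encryption by layering TLS over their sockets instead of handling keys themselves. A minimal sketch of the client side, using Python’s standard ssl module (the host is a placeholder):

# Minimal in-flight encryption sketch: wrap a TCP connection in TLS.
# Session keys are negotiated per connection and discarded afterward.
import socket
import ssl

context = ssl.create_default_context()   # also verifies the server certificate

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print("Negotiated:", tls.version(), tls.cipher())
        tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200))              # response arrives encrypted on the wire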

And there you have it. Hopefully by now you have a good understanding of the most common encryption options available to you. Just keep in mind that more often than not, at rest and in flight encryption are implemented in conjunction and complement each other. When choosing the right methodology, it is critical to understand the use case, application, and compliance requirements. You will also want to make sure that the software or technology you choose adheres to strong, widely vetted encryption algorithms, such as AES, 3DES, RSA, and Blowfish.

-Zeb Ahmed

May 3, 2016

Make the most of Watson Language Translation on Bluemix

How many languages can you speak (sorry, fellow geeks; I mean human languages, not programming)?

Every day, people across the globe depend more and more on the Internet for their day-to-day activities, increasing the need for software to support multiple languages to accommodate the growing diversity of its users. If you develop software, this means it is only a matter of time before you are tasked with translating your applications.

Wouldn't it be great if you could learn something with just a few keystrokes? Just like Neo in The Matrix when he learns kung fu. Well, wish no more! I'll show you how to teach your applications to speak multiple languages with just a few keystrokes using Watson’s Language Translation service, available through Bluemix. It provides on-the-fly translation between many languages. You pay only for what you use, and it’s consumable through web services, which means pretty much any application can connect to it—and it's platform and technology agnostic!

I'll show you how easy it is to create a PHP program with language translation capabilities using Watson's service.

Step 1: The client.

You can write your own code to interact with Watson’s Translation API, but why should you? The work is already done for you. You can pull in the client via Composer, the de facto dependency manager for PHP. Make sure you have Composer installed, then create a composer.json file with the following contents:

[Image: composer.json file contents]



We will now ask Composer to install our dependency. Execute one of the following commands from your CLI:



[Image: Installing the dependency]



After the command finishes, you should have a 'vendor' directory created.

 

Step 2: The credentials.

From Bluemix, add the Language Translation service to your application and retrieve its credentials from the application's dashboard (shown below).






 

Step 3: Put everything together.

At the same level where the composer.json file was created in Step 1, create a PHP file named test.php with the following contents:

[Image: test.php file contents]





Save the file, buckle up, and execute it from the command line:

[Image: Executing test.php]

 

Voilà! Your application now speaks French!
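
And if PHP is not your thing, remember the service is consumable from any language as a plain REST call. Here is a rough Python equivalent of test.php; the endpoint and v2 path are assumptions based on the service documentation of the time, and the credentials are the ones retrieved in Step 2:

# Rough Python equivalent of test.php: call Language Translation's REST
# API directly. Endpoint and version path assumed from the 2016 docs.
import requests

URL = "https://gateway.watsonplatform.net/language-translation/api/v2/translate"
USERNAME = "your-service-username"   # from the credentials in Step 2
PASSWORD = "your-service-password"

resp = requests.post(
    URL,
    auth=(USERNAME, PASSWORD),
    headers={"Accept": "application/json"},
    json={"text": "Hello, world!", "source": "en", "target": "fr"},
)
resp.raise_for_status()
print(resp.json()["translations"][0]["translation"])   # the French text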

Explore other languages Watson knows and other cool features available through Watson's Language Translation service.

 

-Sergio







 

April 5, 2016

When in doubt with firewalls, “How Do I?” it out

Spring is a great time to take stock and wipe off the cobwebs at home. Within the sales engineering department at SoftLayer, we thought it was a good idea to take a deeper look at our hardware firewall products and revamp our support documentation. Whether you’re using our shared hardware firewalls, a dedicated hardware firewall, or the FortiGate Security Appliance, we have lots of new information to share with you on KnowledgeLayer.

One aspect we’re highlighting is a series of articles entitled “How Do I?” within the Firewalls KnowledgeLayer node. A “How Do I?” article provides a detailed explanation of how to use a SoftLayer service or tool within the customer portal or API.

For example, perhaps your cloud admin has just won the lottery, and has left the company. And now you need to reorient yourself with your company’s security posture in the cloud. Your first step might be to read “How Do I View My Firewalls?” which provides step-by-step instructions about how to view and manage your hardware firewalls at SoftLayer within the customer portal. If you discover you've been relying on iptables instead of an actual firewall to secure your applications, don't panic—ordering and securing your infrastructure with hardware firewalls can be done in minutes. Be sure to disable any accounts and API keys you no longer need within the Account tab. If you're new to SoftLayer and our portal, take a look at our on-demand webinars and training video series.
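
If you’d rather take that inventory from code than from the portal, the SoftLayer Python client exposes a firewall manager. A minimal sketch follows; the manager and method names are assumptions drawn from the client’s documentation, so treat it as a starting point:

# Minimal sketch: list account firewalls with the SoftLayer Python
# client (pip install SoftLayer). Manager/method names are assumed
# from the client documentation; verify before relying on them.
import SoftLayer

client = SoftLayer.create_client_from_env(
    username="your-portal-username",
    api_key="your-api-key",
)
firewalls = SoftLayer.FirewallManager(client).get_firewalls()

for fw in firewalls:
    print(fw.get("id"), fw.get("networkVlan", {}).get("vlanNumber"))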

Now that you’ve identified the types of firewalls protecting your infrastructure, feel free to drill into our updated articles that can help you out. If you’re running a dedicated hardware firewall and want to know how to manage it within the portal, this “How Do I?” article is for you. We’ve also tailored “How Do I?” entries for shared hardware firewalls and the FortiGate Security Appliance to help you get up to speed in no time. The SoftLayer customer portal also provides you with the ability to download firewall access logs as a CSV file. See for yourself how the Internet can truly be a hostile environment for a web-facing server. Every access attempt blocked by your firewall saves your server the work of processing software firewall rules and keeps your application safer.

We know that not all issues can be covered by how-to articles. To address that, we’ve also added a number of new entries to the Firewalls FAQ section. 

Keep the feedback coming! We’re here to help answer your sales-related technical questions. And be sure to check out our latest Sales Engineering Webinar: Creating a Digital Defense Plan with Firewalls. 

March 25, 2016

Be an Expert: Handle Drive Failures with Ease

Bare metal servers at SoftLayer employ best-in-class, industry-proven SAS, SATA, or SSD disks, which are extensively tested and qualified in-house by data center technicians. They are reliable, enterprise-grade hardware. However, the possibility of a single device failing cannot be neglected under unforeseen circumstances. HDD or device failures can happen for various reasons, like power surges, mechanical/internal failure, drive firmware bugs, overheating, and aging. Though all efforts are made to mitigate these issues by selecting best-in-class hard drives and pre-testing devices before making them available to customers, one could still run into drive failures occasionally.

Is having RAID protection just good enough?

Drive failures on dedicated bare metal servers may cause data loss, downtime, and service interruptions if the servers are not deployed with an adequate risk mitigation plan. As a first line of defense, users choose RAID at various levels. This may seem sufficient, but it can have the following problems:

  • The volume associated with the failed drive becomes degraded. This brings virtual drive (VD) performance below acceptable levels. A degraded volume will most likely disable write-back caching, which further degrades write performance.
  • There is always a chance of another disk failing in the meantime. Unless a new disk is inserted and a rebuild is completed, a second disk failure could be catastrophic.    

Today, a manual response to disk failure may take quite some time between when the user is notified (or becomes aware) that a disk has failed and when a technician changes the disk in the server. During this time, a second disk failure looms large over the user while the system is in a degraded state.

To mitigate this risk, SoftLayer recommends that users always have Global Hot Spare or Dedicated Hot Spare disks wherever available on their bare metal servers. Users can choose one or more Hot Spare disks per server. This typically requires the user to earmark a drive slot for hot spares. When ordering bare metal servers, it is recommended to take into consideration having empty drive slots for global hot spare drives.

Adding a Hot Spare on an LSI MegaRAID Adaptor

Users can use the WebBIOS utility or MegaRAID Storage Manager to add a Hot Spare drive.

It is easiest to configure using the MegaRAID Storage Manager software, available on the AVAGO website.

Once logged in, you’ll want to choose the Logical tab to view the unused disks under “Unconfigured Drives.” Right-clicking and selecting “Assign Global Hot Spare” will make sure this drive is on standby for any drive failure in any of the RAID volumes configured in the system. You can also choose a Dedicated Hot Spare for specific critical volumes. Figure 1 shows how to add a Global Hot Spare using MSM. MegaRAID Storage Manager can also be used to access the server from a third-party machine or service laptop by providing the server IP address.

[Figure 1: Adding a Global Hot Spare using MegaRAID Storage Manager]

You can also use the WebBIOS interface to add Hot Spare drives. This is done by breaking into the card BIOS at an early stage of booting by using Ctrl+R to access the BIOS Configuration Utility. As a prerequisite for accessing the KVM screen to see the boot-time messages, you’ll need to VPN into the SoftLayer network and use KVM under the “Actions” dropdown in the customer portal.

Once inside the WebBIOS screen, access the “PD Mgmt” tab and choose a free drive. Pressing F2 on the highlighted drive will display a menu for making the drive a Global Hot Spare. Figure 2 below provides more details on making a Hot Spare using the BIOS interface. We recommend using the virtual keyboard while navigating and issuing commands in the KVM viewer.

[Figure 2: Making a Hot Spare using the BIOS interface]

Adding a Hot Spare Through an Adaptec Adaptor

Adaptec also provides the Adaptec Storage Manager and a BIOS option to add Global Hot Spares.

The Adaptec Storage Manager comes preinstalled on SoftLayer servers for supported operating systems. It can also be downloaded for the specific Adaptec card from this link. After launching the Adaptec Storage Manager, users can select a specific available free drive and create a global hot spare drive, as shown in Figure 3.

[Figure 3: Creating a global hot spare in Adaptec Storage Manager]

Adaptec also provides a BIOS-based configuration utility that can be used to add a Hot Spare. To do this, you’ll need to break into the BIOS utility by using Ctrl+A early in the boot process. After that, select Global Hot Spares from the main menu to enter the drive selection page. Select a drive by pressing Insert, then press Enter to submit the changes. Figure 4 below depicts the selection of a Global Hot Spare using the BIOS configuration utility.

[Figure 4: Selecting a Global Hot Spare using the BIOS configuration utility]

Using Hot Spares reduces the risk of further drive failures and also lowers the time the system remains in a degraded state. We recommend SoftLayer customers leverage these benefits on their bare metal servers to be better armed against drive failures.

-Subramanian

March 24, 2016

future.ready(): 7 Things to Check Off Your Big Data Development List

Frank Ketelaars, Big Data Technical Leader for Europe at IBM, offers a checklist that every developer should have pinned to their board when starting a big data project.

Editor’s Note: Does your brain switch off when you hear industry-speak words like “innovation,” “transformation,” “leading edge,” “disruptive,” and “paradigm shift”? Go on, go ahead and admit it. Ours do, too. That’s why we’re launching the future.ready() series—consisting of blogs, podcasts, webinars, and Twitter chats—with content created by developers, for developers. Nothing fluffy, nothing buzzy. With the future.ready() series, we aim to equip you with tools and knowledge that you can use—not just talk and tweet about.

For the first edition, I’ve invited Frank Ketelaars, an expert in high volume data space, to walk us through seven things to check off when starting a big data development project.

-Michalina Kiera, SoftLayer EMEA senior marketing manager

 

This year, big data moves from a water cooler discussion to the to-do list. Gartner estimates that more than 75 percent of companies are investing or planning to invest in big data in the next two years.

I have worked on multiple high volume projects in industries that include banking, telecommunications, manufacturing, life sciences, and government, and in roles including architect, big data developer, and streaming analytics specialist. Based on my experience, here’s a checklist I put together that should give developers a good start. Did I miss anything? Join me on the Twitter chat or webinar to share your experience, ask questions, and discuss further. (See details below.)     

1. Team up with a person who has a budget and a problem you can solve.

For a successful big data project, you need to solve a business problem that’s keeping somebody awake at night. If there isn’t a business problem and a business owner—ideally one with a budget—your project won’t get implemented. Experimentation is important when learning any new technology. But before you invest a lot of time in your big data platform, find your sponsor. To do so, you’ll need to talk to everyone, including IT, business users, and management. Remember that the technical advantages of analytics at scale might not immediately translate into business value.

2. Get your systems ready to collect the data.

With additional data sources, such as devices, vehicles, and sensors connected to networks and generating data, the variety of information and transportation mechanisms has grown dramatically, posing new challenges for the collection and interpretation of data.

Big data often comes from sources outside the business. External data comes at you in a variety of formats (including XML, JSON, and binary), and through a variety of different APIs. In 2016, you might think that everyone is on REST and JSON, but think again: SOAP still exists! The variety of the data is the primary technical driver behind big data investments, according to a survey of 402 business and IT professionals by management consultancy NewVantage Partners. From one day to the next, an API might change or a source might become unavailable.

Maybe one day we’ll see more standardization, but it won’t happen any time soon. For now, developers must plan to spend time checking for changes in APIs and data formats, be ready to respond quickly to avoid service interruptions, and expect the unexpected.
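
As a small, hedged illustration of that defensiveness, here is a Python sketch of a collector that refuses to ingest a feed whose content type or fields have drifted. The endpoint and field names are hypothetical:

# Sketch: defensive ingestion of an external feed. The endpoint and
# field names are hypothetical; the point is to fail loudly when an
# API or format changes instead of silently corrupting downstream data.
import requests

FEED_URL = "https://api.example.com/v1/events"   # hypothetical endpoint
REQUIRED_FIELDS = {"id", "timestamp", "payload"}

def fetch_events():
    resp = requests.get(FEED_URL, timeout=10)
    resp.raise_for_status()

    content_type = resp.headers.get("Content-Type", "")
    if "json" not in content_type:               # e.g., provider moved to XML
        raise ValueError("Unexpected content type: %s" % content_type)

    events = resp.json()
    for event in events:
        missing = REQUIRED_FIELDS - set(event)
        if missing:                              # schema drifted
            raise ValueError("Feed schema changed, missing: %s" % missing)
    return events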

3. Make sure you have the right to use that data.

Governance is a business challenge, but it’s going to touch developers more than ever before—from the very start of the project. Much of the data they will be handling is unstructured, such as text records from a call center. That makes it hard to work out what’s confidential, what needs to be masked, and what can be shared freely with external developers. Data will need to be structured before it can be analyzed, but part of that process includes working out where the sensitive data is, and putting measures in place to ensure it is adequately protected throughout its lifecycle.

Developers need to work closely with the business to ensure that they can keep data safe, and provide end users with a guarantee that the right data is being analyzed and that its provenance can be trusted. Part of that process will be about finding somebody who will take ownership of the data and attest to its quality.

4. Pick the right tools and languages.

With no real standards in place yet, there are many different languages and tools used to collect, store, transport, and analyze big data. Languages include R, Python, Julia, Scala, and Go (plus the Java and C++ you might need to work with your existing systems). Technologies include Apache Pig, Hadoop, and Spark, which can provide massively parallel processing on top of a file system even without Hadoop. There’s a list of 10 popular big data tools here, another 12 here, and a round-up of 45 big data tools here. 451 Research has created a map that classifies data platforms according to database type, implementation model, and technology. It’s a great resource, but its 18-color key shows how complex the landscape has become.

Not all of these tools and technologies will be right for you, but they hint at one way the developer’s core competency must change. Big data will require developers to be polyglots, conversant in perhaps five languages, who specialize in learning new tools and languages fast—not deep experts in one or two languages.

Nota bene: MapReduce and Pig are among the highest-paid technology skills in the US, and other big data skills are likely to be highly sought after as demand for them grows. Scala is a relatively new functional programming language for data preparation and analysis, and I predict it will be in high demand in the near future.

5. Forget “off-the-shelf.” Experiment and set up a big data solution that fits your needs. 

You can think of big data analytics tools like Hadoop as a car. You want to go to the showroom, pay, get in, and drive away. Instead, you’re given the wheels, doors, windows, chassis, engine, steering wheel, and a big bag of nuts and bolts. It’s your job to assemble it.

As InfoWorld notes, DevOps tools can help to create manageable Hadoop solutions. But you’re still faced with a lot of pieces to combine, diverse workloads, and scheduling challenges.

When experimenting with concepts and technologies to solve a certain business problem, also think about successful deployment in the organization. The project does not stop after the proof of concept.

6. Secure resources for changes and updates.

Apache Hadoop and Apache Spark are still evolving rapidly, and it is inevitable that the behavior of components will change over time; some may be deprecated shortly after their initial release. Implementing new releases will be painful, and developers will need an overview of the big data infrastructure to ensure that as components change, their big data projects continue to perform as expected.

The developer team must plan time for updates and deprecated features, and a coordinated approach will be essential for keeping on top of the change.

7. Use infrastructure that’s ready for CPU and I/O intensive workloads.

My preferred definition of big data (and there are many – Forbes found 12) is this: "Big data is when you can no longer afford to bring the data to the processing, and you have to do the processing where the data is."

In traditional database and analytics applications, you get the data, load it onto your reporting server, process it, and post the results to the database.

With big data, you have terabytes of data, which might reside in different places—and which might not even be yours to move. Getting it to the processor is impractical. Big data technologies like Hadoop are based on the concept of data locality—doing the processing where the data resides.
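
A minimal PySpark sketch makes the locality idea concrete: the filter below executes on the cluster nodes holding each block of the file, and only the final count travels back to the driver. The path is a placeholder:

# Minimal data-locality sketch with PySpark: ship the computation to
# the data; only the final count travels back to the driver.
from pyspark import SparkContext

sc = SparkContext(appName="locality-sketch")

# Each HDFS block is processed by an executor on the node that stores it.
logs = sc.textFile("hdfs:///data/weblogs/*.log")   # placeholder path
error_count = logs.filter(lambda line: " 500 " in line).count()

print("Server errors found:", error_count)
sc.stop()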

You can run Hadoop in a virtualized environment. Virtual servers don’t have local data, though, so the time taken to transport data between the SAN or other storage device and the server hurts the application’s performance. Noisy neighbors, unpredictable server speeds and contested network connections can have a significant impact on performance in a virtualized environment. As a result, it’s difficult to offer service level agreements (SLAs) to end users, which makes it hard for them to depend on your big data implementations.

The answer is to use bare metal servers on demand, which enable you to predict and guarantee the level of performance your application can achieve, so you can offer an SLA with confidence. Clusters can be set up quickly, so you can accelerate your project. Because performance is predictable and consistent, it’s possible to offer SLAs to business owners that will encourage them to invest in the big data project and rely on it for making business decisions.

How can I learn more?

Join me in the Twitter chat and webinar (details below) to discuss how you’re addressing big data or have your questions answered by me and my guests.  

Add our Twitter chat to your calendar. It happens Thursday, March 31 at 1 p.m. CET. Use the hashtag #SLdevchat to share your views or post your questions to me.

Register for the webinar on Wednesday, April 20, from 5 to 6 p.m. CET.

 

About the author

Frank Ketelaars has been Big Data Technical Leader in Europe for IBM since August 2013. As an architect, big data developer, and streaming analytics specialist, he has worked on multiple high volume projects in banking, telecommunications, manufacturing, life sciences and government. He is a specialist in Hadoop and real-time analytical processing.


 

February 10, 2016

The Compliance Commons: Do you know our ISOs?

Editor’s note: This is the first of a three-part series designed to address general compliance topics and to answer frequently asked compliance questions.

How many times have you been asked by a customer if SoftLayer is ISO compliant?  Do you ever find yourself struggling for an immediate answer?  If so, you're not alone. 

ISO stands for International Organization for Standardization. The organization has published more than 19,000 international standards, covering almost all aspects of technology and business. If you have any questions about a specific ISO standard, you can search the ISO website. If you would like the full details of any ISO standard, an online copy of the standard can be purchased through their website. 

SoftLayer holds three ISO certifications, and we’re going after more. We offer industry-standard security best practices relating to cloud infrastructure, including:

ISO/IEC 27001: This certification covers the information security management process. It certifies that SoftLayer offers best security practices in the industry relating to cloud infrastructure as a service (IaaS). Going through this process and obtaining certification means that SoftLayer observes industry best practices in offering a safe and secure place to live in the cloud. It also means that our information security management practices adhere to strict, internationally recognized best practices.

ISO/IEC 27018: This certifies that SoftLayer follows the most stringent code of practice for the protection of personally identifiable information (PII) in public clouds acting as PII processors. It establishes commonly accepted control objectives, controls, and guidelines for implementing measures to protect PII in accordance with the privacy principles in ISO/IEC 29100 for the public cloud computing environment. While not all of SoftLayer is public, and while we have very distinct definitions for processing PII for customers, we decided to obtain the certification to demonstrate that our security and privacy principles are robust.

ISO/IEC 27017: This is a code of practice for information security controls for cloud services. It’s the global standard for cloud security practices—not only for what SoftLayer should do, but also for what our customers should do to protect information. SoftLayer’s ISO 27017 certification demonstrates our continued commitment to upholding the highest, most secure information security controls and applying them effectively and efficiently to our cloud infrastructure environment. The standard provides guidance in the following areas, among others:

  • Information Security
  • Human Resources
  • Asset Management
  • Access Control
  • Cryptography
  • Physical and Environmental Security
  • Operations Security
  • Communications Security
  • System Acquisition, Development & Maintenance
  • Supplier Relations
  • Incident Management
  • Business Continuity Management
  • Compliance
  • Network Security

How can SoftLayer’s ISO certification benefit me as a customer?

Customers can leverage SoftLayer’s certifications as long as it’s done in the proper manner. Customers cannot claim that they’re ISO certified just because they’re using SoftLayer infrastructure; that’s not how it works. SoftLayer’s ISO certifications may, however, make it easier for customers to become certified, because they can leverage our certification for the SoftLayer boundary. Our SOC 2 report (available through our customer portal or sales team) describes our boundary in greater detail: customers are not responsible for certifying what’s inside SoftLayer’s boundary.


How does SoftLayer prove its ISO compliance?

SoftLayer’s ISO Certificates of Registration are publicly available on our website and on our third-party assessor’s website. By design, our ISO certificates denote that we conform to and meet all the applicable objectives of each standard. Because the ISO standards impose the same steadfast controls on everyone, we don’t share the reports from our audits, but we can provide our certificates.

What SoftLayer data centers are applicable to the ISO certifications?

All of them! Each ISO certificate applies to every one of our data centers, in the U.S. and internationally. SoftLayer obtained ISO certifications for every one of our facilities because we operate with consistency across the globe. When a new SoftLayer data center comes online, there is some lag time between opening and certification because we need to be reviewed by our third-party assessor and have operational evidence available to support our data center certification. But as soon as we obtain the certifications, we’ll make them available.

Visit www.softlayer.com/compliance for a full list of our certifications and reports. They can also be found through the customer portal.

-Dana

 
