May 11, 2016

Adventures in Bluemix: Migrating to MQ Light

One of my pet projects at SoftLayer is looking at a small collection of fancy scripts that scan through all registered Internet domain names to see how many of them are hosted on SoftLayer’s infrastructure. There are a lot of fun little challenges involved, but one of the biggest challenges is managing the distribution of work so that this scan doesn’t take all year. Queuing services are great for task distribution, and for my initial implementation I decided to give running a RabbitMQ instance a try, since at the time it was the only queuing service I was familiar with. Overall, it took me about a week and one beefy server to go from “I need a queue,” to “I have a queue that is actually doing what I need it to.”

While what I had set up worked, looking back, there is a lot about RabbitMQ that I didn’t really have the time to figure out properly. Around the time I finished the first run of this project, Bluemix announced that its MQ Light service would allow connections from non-Bluemix resources. So when I got some free time, I decided to move the project to a Bluemix-hosted MQ Light queue and take some notes on how the migration went.

Project overview

To better understand how much work was involved, let me quickly explain how the whole “scanning through every registered domain for SoftLayer hosted domains” thing works.

There are three main moving parts in the project:

  1. The Parser, which is responsible for reading through zone files (which are obtained from the various registrars), filtering out duplicates, and putting nicely formatted domains into a queue.
  2. The Resolver, which is responsible for taking the nicely formatted domains from queue #1, looking up each domain’s IP address, and putting the result into queue #2.
  3. The Checker, which takes the domains from queue #2, checks to see if the domains’ IPs belong to SoftLayer or not, and saves the result in a database.

Each queue entry is a package of about 500 domains, which is roughly 200KB of text consisting of the domains plus some metadata I used to see how well everything was performing. There are around 160 million domains I need to review, and resolving a single domain can take anywhere from 0.001 seconds to four seconds, so being able to push domains through the queues quickly is very important.
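
For illustration, the batching itself is nothing exotic. A hypothetical parser-side helper (the names here are invented for the example, not lifted from my actual scripts) might look something like this:

    import json

    BATCH_SIZE = 500  # domains per queue entry, roughly 200KB of text

    def batched_messages(domains):
        # Yield queue-ready JSON payloads of up to BATCH_SIZE domains each.
        batch = []
        for domain in domains:
            batch.append(domain)
            if len(batch) == BATCH_SIZE:
                yield json.dumps({"domains": batch})
                batch = []
        if batch:  # don't drop the final partial batch
            yield json.dumps({"domains": batch})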

Things to be aware of

Going into this migration, I made a lot of assumptions about how things worked that caused me grief. So if you are in a similar situation, here is what I wish someone had told me.

AMQP 1.0: MQ Light implements the AMQP 1.0 protocol, which is great, because it is the newest and greatest. As everyone knows, newer is usually better. The problem is that my application was using the python-pika library to connect to RabbitMQ, and both pika and RabbitMQ speak AMQP 0.9.1, which isn’t fully compatible with AMQP 1.0. The Python library I was using gave me a version error when trying to connect to MQ Light, so it took a bit of refactoring to get everything working properly. The core ideas are the same, but some of the specific API calls are slightly different.

Persistence: Messages sent to an MQ Light queue without active subscribers will be lost, which took me a while to figure out. The UI indicates when this happens, so this is likely just a problem of me not reading the documentation properly and assuming MQ Light worked like RabbitMQ.

Threads: The python-mqlight library uses threads fairly heavily, which is great for performance, but it makes programming a little more thought-intensive. Make sure you wait for the connection to initialize before sending any messages, and make sure all your messages have actually been sent before exiting.
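
To make that concrete, here is a rough sketch of the callback pattern, modeled on the send/receive samples that ship with python-mqlight. Treat the argument names and signatures below as approximations to check against those samples, not as gospel:

    import threading
    import mqlight

    all_sent = threading.Event()

    def on_sent(err, topic, data, options):
        # Runs on one of the client's worker threads once the message is
        # confirmed; only now is it safe to let the process exit.
        all_sent.set()

    def on_started(err):
        # The client connects on a background thread, so sends must wait for
        # this callback instead of firing right after construction.
        client.send('domains/formatted', '{"domains": ["example.com"]}',
                    on_sent=on_sent)

    client = mqlight.Client('amqp://your-mqlight-host:5672',
                            client_id='parser-1', on_started=on_started)

    all_sent.wait(timeout=30)  # don't fall off the end of the script early
    client.stop()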

Apache Proton: MQ Light is built on the Apache Qpid Proton project, and the python-mqlight library uses it under the hood as well.

Setting up MQ Light

Aside from those small issues I mentioned, MQ Light was really easy to set up and start using, especially when compared to running my own RabbitMQ instance.


  1. Set up the MQ Light Service in Bluemix.
  2. Install the python-mqlight library (or whatever library supports your language of choice). There are a variety of MQ Light Libraries.
  3. Try the send/receive examples.
  4. Write some code.
  5. Watch the messages come in, and profit.

That’s all there is to it. As a developer, the ease with which I can set up services to try is one of the best things about Bluemix, with MQ Light making a great addition to its portfolio of services.

Some real numbers

After I re-factored my code to be able to use either the pika or python-mqlight libraries interchangeably, I ran a sample set of data through each library to see what impact they had on overall performance, and I was pleasantly surprised to see the results.

Doing a full run-through of all domains would take about seven hours, so I ran this test with only 10,364 domains. Below are the running times for each section, in seconds.

Local RabbitMQ (running on a 4-core, 49GB RAM VSI):

  Parser: 0.054s
  Resolver: 90.485s
  Checker: 0.0027s

Bluemix MQ Light:

  Parser: 1.593s
  Resolver: 86.756s
  Checker: 6.766s

Since I am using the free, shared tier of MQ Light, I was honestly expecting much worse results. Seeing only a few seconds’ increase in runtime was a really big win for MQ Light.

Overall, I was very pleased working with MQ Light, and I highly suggest it as a starting place for anyone wanting to check out queuing services. It was easy to set up, free to try out, and pretty simple once I started to understand the basics.

-Chris

May 5, 2016

Everything you need to know about IBM POWER8 on SoftLayer

SoftLayer provides industry-leading cloud Infrastructure as a Service from a growing number of data centers around the world. To enable clients to draw critical insights and make better decisions faster, now there’s even more good news—customers and partners can use and rely on the secure, flexible, and open platform of IBM POWER Systems, which have just become available in SoftLayer’s DAL09 data center.

POWER8 servers are built with a processor designed and optimized specifically for big data workloads, combining compute power, cutting-edge memory bandwidth, and I/O in ways that result in increased levels of performance, resiliency, availability, and security.

IBM POWER systems were designed to run many of the most demanding enterprise applications, industry-specific solutions, relational database management systems, and high performance computing environments. POWER8 servers are an ideal system for Linux and support a vast ecosystem of open source, ISV, and IBM software products, giving clients a single, industry-leading open architecture (IBM POWER) in which to store, retrieve, and derive value from the “gold mine” of next-generation applications.

The new POWER8 servers available from SoftLayer offer an optimal hybrid cloud infrastructure to test new Linux workloads in a secure and isolated cloud environment with reduced risk. As clients explore newer use cases like advanced analytics, machine learning, and cognitive computing against the combination of vast amounts of both structured and unstructured data, POWER8 and SoftLayer are in a unique position to accelerate client value. This new offering will also continue to leverage the rapidly expanding community of developers contributing to the OpenPOWER ecosystem as well as thousands of independent software vendors that support Linux on Power applications.

The explosive growth of both structured and unstructured data requires businesses to derive insights and change faster than ever to keep pace. The cloud enables you to do just that. Our new and unique solution pairs SoftLayer’s Network-Within-a-Network topology for true out-of-band access, an easy-to-use customer portal, and robust APIs for full remote access of all product and service management options with the unique high-performance technology from IBM POWER8 to help accelerate the creation and delivery of the next generation of IT solutions.

For more details, visit our POWER8 servers page.

 

-Chuck Calio, IBM Power Systems Growth Solution Specialist

May 3, 2016

Make the most of Watson Language Translation on Bluemix

How many languages can you speak (sorry, fellow geeks; I mean human languages, not programming)?

Every day, people across the globe depend more and more on the Internet for their day-to-day activities, increasing the need for software to support multiple languages to accommodate a growing diversity of users. If you develop software, that means it is only a matter of time before you are asked to translate your applications.

Wouldn't it be great if you could learn something with just a few keystrokes? Just like Neo in The Matrix when he learns kung fu. Well, wish no more! I'll show you how to teach your applications to speak multiple languages with just a few keystrokes using Watson’s Language Translation service, available through Bluemix. It provides on-the-fly translation between many languages. You pay only for what you use, and it’s consumable through web services, which means pretty much any application can connect to it—and it's platform and technology agnostic!
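
Since the service is just a REST endpoint, you can smoke-test it from any language before writing a line of PHP. Here's a minimal Python sketch; the /v2/translate path follows the service's v2 REST API as I understand it, and the URL, username, and password below are placeholders you'd replace with your own service credentials:

    import requests

    # Placeholders: copy these values from your Bluemix service credentials.
    WATSON_URL = 'https://gateway.watsonplatform.net/language-translation/api'
    AUTH = ('YOUR_SERVICE_USERNAME', 'YOUR_SERVICE_PASSWORD')

    response = requests.post(
        WATSON_URL + '/v2/translate',
        auth=AUTH,
        json={'text': 'Hello, world!', 'source': 'en', 'target': 'fr'},
        headers={'Accept': 'application/json'},
    )
    print(response.json())  # expect something like 'Bonjour, le monde!'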

I'll show you how easy it is to create a PHP program with language translation capabilities using Watson's service.

Step 1: The client.

You can write your own code to interact with Watson’s Translation API, but why should you? The work is already done for you. You can pull in the client via Composer, the de facto dependency manager for PHP. Make sure you have Composer installed, then create a composer.json file with the following contents:

composer.json file

We will now ask Composer to install our dependency. From your CLI, execute composer install (or php composer.phar install, depending on how you installed Composer).
After the command finishes, you should have a 'vendor' directory created.

 

Step 2: The credentials.

From Bluemix, add the Language Translation service to your application and retrieve its credentials from the application's dashboard.

Step 3: Put everything together.

At the same level where the composer.json file was created in Step 1, create a PHP file named test.php with the following contents:

test.php file

Save the file, buckle up, and execute it from the command line:

php test.php

 

Voilà! Your application now speaks French!

Explore other languages Watson knows and other cool features available through Watson's Language Translation service.

 

-Sergio







 

April 26, 2016

Cloud. Ready-to-Wear.

It’s been five years since I started my journey with SoftLayer. And what a journey it has been—from being one of the first few folks in our Amsterdam office, to becoming part of the mega-family of IBMers; from one data center in Europe to six on this side of the pond and 40+ around the globe; from “Who is SoftLayer?” (or my favorite, “SoftPlayer”), to becoming a cloud environment fundamental for some of the biggest and boldest organizations worldwide.

But the most thrilling difference between 2016 and 2011 that I’ve been observing lately is a shift in how the market perceives cloud, in which matters are important to adopters, and in the technology itself becoming mainstream.

Organizations of all sizes—small, medium, and large, while still raising valid questions around the level of control and security—are more often talking about challenges regarding managing the combined on-prem and shared environments, readiness of their legacy applications to migrate to cloud, and their staff competency to orchestrate the new architecture.

At Cloud Expo 2016 (the fifth one for the SoftLayer EMEA team), next to two tremendous keynotes given by Sebastian Krause, General Manager IBM Cloud Europe, and by Rashik Parmar, Lead IBM Cloud Advisor/Europe IBM Distinguished Engineer, we held a roundtable to discuss the connection between hybrid cloud and agile business. Moderated by Rashik Parmar, the discussion confirmed the market’s evolution: from recognizing cloud as technology still proving its value, to technology critical in gaining a competitive advantage in today’s dynamic economy.

Rashik’s guests had deep technology backgrounds and came from organizations of all sizes and flavors—banking, supply chain management, ISV, publishing, manufacturing, MSP, insurance, and digital entertainment, to name a few. Most of them already have live cloud deployments, or have one ready to go into production this year.

When it came to the core factors underlying a move into the cloud, they unanimously listed gaining business agility and faster time-to-market. For a few minutes, there was a lively conversation among the panelists about cost and savings. They raised examples of poorly planned cloud implementations that were 20 to 30 percent more costly than keeping the legacy IT setup. Citing the example of a large Australian bank, Rashik urged companies to start the process of moving into cloud with a careful map of their own application landscape before thinking about remodeling the architecture to accommodate cloud.

The next questions the panelists tackled pertained to the drivers behind building hybrid cloud environments, which included:

  • Starting with some workloads and building a business case based on their success; from there, expanding the solution organization-wide
  • Increasing the speed of market entry for new solutions and products
  • Retiring certain legacy applications on-prem, while deploying new ones on cloud
  • Regulatory requirements that require some workloads or data to remain on-prem

When asked to define “hybrid cloud,” Rashik addressed the highly ambiguous term by simply stating that it refers to any combination of software-defined environment and automation with traditional IT.

The delegates discussed the types of cloud—local, dedicated, and shared—and found it difficult to define who controls hybrid cloud, and who is accountable for what component when something goes wrong. There was a general agreement that many organizations still put physical security over the digital one, which is not entirely applicable in the world of cloud.

Rashik explored, from his experience, where most cases of migrating into cloud usually originate. He referred to usage patterns and how organizations become agile with hybrid IT. The delegates agreed that gaining the option of immediate burstability and removing the headache of optimal resource management, from hardware to internal talent, are especially important.

Rashik then addressed the inhibitors of moving into cloud—and here’s the part that inspired me to write this post. While mentions of security (data security and job security) and the control over the environment arose, the focus repeatedly shifted toward the challenges of applications being incompatible with cloud architecture, complicated applications landscape, and scarcity of IT professionals skilled in managing complex (hybrid) cloud environments.

This is a visible trend that demonstrates the market has left the cloud department store’s changing room, and is ready not only to make the purchase, but to “wear” the new technology with a clear plan of where and when, and with an aim to achieve specific outcomes.

The conversation ended with energizing insights about API-driven innovation that enables developers to assemble a wide spectrum of functions, as opposed to being “just a coder.” Other topics included cognitive computing that bridges digital business with digital intelligence, and platforms such as blockchain that are gaining momentum.

To think that not so long ago, I had to explain to the average Cloud Expo delegate what “IaaS” stands for. We’ve come a long way.

 

-Michalina

April 6, 2016

Cloudocracy: Cedato believes in showing the right ad to the right viewer

In the latest edition of our Cloudocracy series—which celebrates SoftLayer customers shaking up their industries—meet Cedato. Have you noticed video ads appearing more often over non-video content online? SoftLayer customer Cedato makes that possible. We sat down with Dvir Doron, Cedato’s CMO, to learn more.

SOFTLAYER: There’s something we’ve always wondered about online video, so perhaps you can help us out. Why are there so many cat videos?

DVIR DORON: I’ll start with a confession: I’ve never uploaded a video of my pets, my children, or any of my hobbies. At the same time, I know I’m an anomaly. Most people want to share their lives, experiences, and happy moments. Cats capture that. We talk about user generated content, and cat and baby videos drove viewership and content at first. I’m not sure that’s the case today. People have moved on. There are more “fail” videos of people falling over and doing crazy stuff now. They make me laugh. What can I say? I’m weak.

SL: Let’s talk about a strength! How are you shaking up the online advertising business?

DORON: People love video ads and they generate tremendous value, but a few years ago the industry was hitting a roadblock because there wasn’t enough advertising space. Then the market started to embrace what we call “in place” advertising, which enables us to place video ads on non-video content. With the shift to mobile, that created a huge challenge. You have issues with the format, streaming conventions, and standards, and things don’t work very well. On the one hand, there was a huge opportunity to increase the supply of ad space, which was hugely in demand. At the same time, there was a major technical issue to solve.

We were established in the middle of last year to offer a sophisticated software layer that enables publishers to run video ads on video and non-video content. Our platform chooses the ad that will load the fastest, matches the user’s interests, and generates the best value for the advertiser and publisher. As long as you keep everyone happy, they will keep coming back.

SL: There is something of a backlash against advertising now, though, with users increasingly installing ad blockers. How can the advertising industry win them over?

DORON: There are a lot of sites out there that offer a very poor experience, but people don’t realize that slow loading times and buffering are not necessarily because of content delivery issues, poor infrastructure, or site mechanics. It’s a result of poor monetization techniques. Websites are trying to show ads that will maximize their revenue but often the ad behind that is not effective. Sorry for the self-promotion, but I believe that if you show the right ad to the right viewer with the lowest possible latency, everyone wins. If the wait times are low, the experience will be good.

SL: That’s an interesting point. What would you say has been your biggest challenge as a startup in this market?

DORON: We were blessed with very rapid growth, so the challenge for us was to provide a scalable platform. We were soon serving billions of ads per month. We needed someone we could count on to be both scalable and elastic, all over the world. So we’ve partnered with SoftLayer from the very beginning. We were extremely happy with the people and the level of support we were getting. As a startup, we really need that extra bit of support.

SL: And we’ve been pleased to provide it! What are your plans for the future?

DORON: We’re looking at TV advertising. The ability to match an ad to a specific viewer is coming in the next couple of years. Not necessarily to broadcast TV, but it’s coming. We’re trying to find areas where it makes sense to connect the advertisers online with TV audiences.

SL: Your focus is usually on the bits between the TV programs. But if we gave you the chance to edit any film or TV show, what would you change?

DORON: I would change the ending of Lost. It was epic. I watched all seven seasons of it, and this was when there were about 20 episodes per season. No spoilers, but I’d change it to something more original.

 

-Michalina

 

April 5, 2016

When in doubt with firewalls, “How Do I?” it out

Spring is a great time to take stock and wipe off the cobwebs at home. Within the sales engineering department at SoftLayer, we thought it was a good idea to take a deeper look at our hardware firewall products and revamp our support documentation. Whether you’re using our shared hardware firewalls, a dedicated hardware firewall, or the FortiGate Security Appliance, we have lots of new information to share with you on KnowledgeLayer.

One aspect we’re highlighting is a series of articles entitled “How Do I?” within the Firewalls KnowledgeLayer node. A “How Do I?” provides a detailed explanation of how to use a SoftLayer service or tool via the customer portal or API.

For example, perhaps your cloud admin has just won the lottery, and has left the company. And now you need to reorient yourself with your company’s security posture in the cloud. Your first step might be to read “How Do I View My Firewalls?” which provides step-by-step instructions about how to view and manage your hardware firewalls at SoftLayer within the customer portal. If you discover you've been relying on iptables instead of an actual firewall to secure your applications, don't panic—ordering and securing your infrastructure with hardware firewalls can be done in minutes. Be sure to disable any accounts and API keys you no longer need within the Account tab. If you're new to SoftLayer and our portal, take a look at our on-demand webinars and training video series.
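
If you'd rather script that inventory than click through the portal, the SoftLayer Python client can pull a similar view. Here's a rough sketch; the networkVlanFirewall mask property is my assumption from the SLAPI data model, so verify it against the API reference before relying on it:

    # pip install softlayer
    import SoftLayer

    # Credentials are placeholders; use your portal username and API key.
    client = SoftLayer.create_client_from_env(username='YOUR_USER',
                                              api_key='YOUR_API_KEY')

    # Ask for each VLAN and, if present, the hardware firewall attached to it.
    vlans = client['Account'].getNetworkVlans(
        mask='mask[id,vlanNumber,networkVlanFirewall[id,datacenter[name]]]')

    for vlan in vlans:
        firewall = vlan.get('networkVlanFirewall')
        status = ('firewall #%s' % firewall['id']) if firewall else 'no firewall'
        print('VLAN %s: %s' % (vlan['vlanNumber'], status))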

Now that you’ve identified the types of firewalls protecting your infrastructure, feel free to drill into our updated articles that can help you out. If you’re running a dedicated hardware firewall and want to know how to manage it within the portal, this “How Do I?” article is for you. We’ve also tailored “How Do I?” entries for shared hardware firewalls and the FortiGate Security Appliance to help you beat the heat in no time. The SoftLayer customer portal also provides you with the ability to download firewall access logs in a CSV file. See for yourself how the Internet can truly be a hostile environment for a web-facing server. Every access attempt blocked by your firewall saves your server the work of processing software firewall rules, and keeps your application safer.

We know that not all issues can be covered by how-to articles. To address that, we’ve also added a number of new entries to the Firewalls FAQ section. 

Keep the feedback coming! We’re here to help answer your sales-related technical questions. And be sure to check out our latest Sales Engineering Webinar: Creating a Digital Defense Plan with Firewalls. 

April 4, 2016

A deeper dive into using VMware on SoftLayer

 

IBM and VMware recently announced an expanded global strategic partnership that enables customers to operate a seamless and consistent cloud, spanning hybrid environments. VMware customers now have the ability to quickly provision new (or scale existing) VMware workloads to IBM Cloud. This helps companies retain the value of their existing VMware-based solutions while leveraging the growing footprint of IBM Cloud data centers worldwide.

IBM customers are now able to purchase VMware software in a flexible, cost-efficient manner to power their deployments on IBM’s bare metal hardware infrastructure service. They’ll also be able to take advantage of their existing skill sets, tools, and technologies versus having to purchase and learn new ones. New customers will have complete control of their VMware environment, allowing them to expand into new markets and reduce startup cost by leveraging SoftLayer’s worldwide network and data centers.

This new offering also allows customers access to the full stack of VMware products to build an end-to-end VMware solution that matches their current on-premises environment or create a new one. Leveraging NSX lets customers manage their SoftLayer network infrastructure and extend their on-premises environment into SoftLayer as well, letting them expand their current capacity while reducing startup capital.

Customers can currently purchase vSphere Enterprise Plus 6.0 from SoftLayer. The VMware software components in Table 1 will be available for a la carte purchase on individual SoftLayer bare metal servers by Q2 2016. All products listed will be billed on a per-socket basis.

Table 1: VMware software components

Product Name                                   | Version | Charge per
VMware vRealize Operations Enterprise Edition  | 6.0     | CPU
VMware vRealize Operations Advanced Edition    | 6.0     | CPU
VMware vRealize Operations Standard Edition    | 6.0     | CPU
VMware vRealize Log Insight                    | 3.0     | CPU
VMware NSX-V                                   | 6.2     | CPU
VMware Integrated OpenStack (VIO)              | 2.0     | CPU
Virtual SAN Standard Tier I (0-20TB)           | 6.X     | CPU
Virtual SAN Standard Tier II (21-64TB)         | 6.X     | CPU
Virtual SAN Standard Tier III (65-124TB)       | 6.X     | CPU
VMware Site Recovery Manager                   | 6.1     | CPU
VMware vRealize Automation Enterprise          | 6.X     | CPU
VMware vRealize Automation Advanced            | 6.X     | CPU

 

The following FAQs will help you better understand the IBM and VMware partnership:

Q: What are you offering today? And how much does it cost?

A: Today, IBM offers vSphere Enterprise Plus 6.0, which includes vCenter and vCloud Connector. It’s currently available for $85 per CPU for single CPU, dual CPU, and quad CPU servers. The products listed in Table 1 will be available in Q2 2016.

Q: Is per-CPU pricing a change from how VMware software was offered before?

A: Yes, the CPU-based pricing is new, and is unique to IBM Cloud. IBM is currently the only cloud provider to offer this type of pricing for VMware software. CPU-based pricing allows customers to more accurately budget how much they spend for VMware software in the cloud.

Q: Can customers bring the licenses they already own and have acquired via an existing VMware license agreement (e.g., ELA)?

A: Customers can take advantage of the new pricing when purchasing the VMware software through the SoftLayer portal. Please contact your VMware sales representative to get approval if you plan on bringing the license you already own to IBM Cloud.

Q: Will you offer migration services?

A: Yes, migration services will be among the portfolio of managed services offerings we will make available. Details will be announced closer to the time of availability, which is later in 2016.

Q: What storage options are available for VMware environments on SoftLayer?

A: Customers can select from a diverse range of SoftLayer storage offerings and custom solutions depending on their requirements and preferences. Use the Select a Storage Option to use with VMware guide to determine the best storage option for your environment.

Q: Where can I find technical resources to learn more about VMware on SoftLayer?

A: There is extensive technical documentation available on KnowledgeLayer.

 

-Kerry Staples and Andreas Groth

March 25, 2016

Be an Expert: Handle Drive Failures with Ease

Bare metal servers at SoftLayer employ best-in-class, industry-proven SAS, SATA, or SSD disks, which are extensively tested and qualified in-house by data center technicians. They are reliable, enterprise-grade hardware. However, single-point device failures cannot be ruled out. HDD or device failures can happen for various reasons: power surges, mechanical or internal failures, drive firmware bugs, overheating, aging, and so on. Though every effort is made to mitigate these issues by selecting the best hard drives and pre-testing devices before making them available to customers, one could still run into drive failures occasionally.

Is having RAID protection just good enough?

Drive failures on dedicated bare metal servers may cause data loss, downtime, and service interruptions if they are not adequately deployed with a risk mitigation plan. As a first line of defense, users choose to have RAID at various levels. This may seem sufficient but may have the following problems:

  • The volume associated with the failed drive becomes degraded, which brings the virtual drive’s performance below an acceptable level. A degraded volume is also likely to disable write-back caching, which further hurts write performance.
  • There is always a chance of another disk failing in the meantime. Unless a new disk is inserted and the rebuild completes, a second disk failure could be catastrophic.

Today, a manual response to disk failure can take quite some time: between when the user becomes aware that a disk has failed and when a technician replaces it at the server, the system sits in a degraded state, with a second disk failure looming large over the user.

To mitigate this risk, SoftLayer recommends that users always have Global Hot Spare or Dedicated Hot Spare disks wherever available on their bare metal servers. Users can choose one or more hot spare disks per server, which typically requires earmarking a drive slot for them. When ordering bare metal servers, consider leaving empty drive slots for global hot spare drives.

Adding a Hot Spare on an LSI MegaRAID Adaptor

Users can use the WebBIOS utility or MegaRAID Storage Manager to add a hot spare drive.

It is easiest to configure using the MegaRAID Storage Manager software, available on the AVAGO website.

Once logged in, you’ll want to choose the Logical tab to view the unused disks under “Unconfigured Drives.” Right-clicking and selecting “Assign Global Hot Spare” ensures the drive stands by for any drive failure in any of the RAID volumes configured in the system. You can also choose a Dedicated Hot Spare for specific volumes that are critical. MegaRAID Storage Manager can also be used to access the server from a third-party machine or service laptop by providing the server’s IP address.

Figure 1 shows how to add a Global Hot Spare using MSM.

You can also use the WebBIOS interface to add hot spare drives. This is done by breaking into the card BIOS early in the boot process, using Ctrl+R to access the BIOS Configuration Utility. As a prerequisite for accessing the KVM screen to see the boot-time messages, you’ll need to VPN into the SoftLayer network and launch KVM under the “Actions” dropdown in the customer portal.

Once inside the WebBIOS screen, access the “PD Mgmt” tab and choose a free drive. Pressing F2 on the highlighted drive will display a menu for making the drive a Global Hot Spare. We recommend using the virtual keyboard while navigating and issuing commands in the KVM viewer.

Figure 2 provides more details for making a Hot Spare using the BIOS interface.

Adding a Hot Spare Through an Adaptec Adaptor

Adaptec also provides the Adaptec Storage Manager and a BIOS option to add Global Hot Spares.

The Adaptec Storage Manager comes preinstalled on SoftLayer servers for supported operating systems, and can also be downloaded for the specific Adaptec card from this link. After launching the Adaptec Storage Manager, users can select an available free drive and create a global hot spare, as shown in Figure 3.

Adaptec also provides a BIOS-based configuration utility that can be used to add a hot spare. To do this, break into the BIOS utility with Ctrl+A early in the boot process. Then select Global Hot Spares from the main menu to enter the drive selection page, select a drive by pressing Insert, and press Enter to submit the changes.

Figure 4 depicts the selection of a Global Hot Spare using the BIOS configuration utility.

Using hot spares reduces the risk of further drive failures and lowers the time the system remains in a degraded state. We recommend that SoftLayer customers leverage these benefits on their bare metal servers to be better armed against drive failures.

-Subramanian

March 24, 2016

future.ready(): 7 Things to Check Off Your Big Data Development List

Frank Ketelaars, Big Data Technical Leader for Europe at IBM, offers a checklist that every developer should have pinned to their board when starting a big data project.

Editor’s Note: Does your brain switch off when you hear industry-speak words like “innovation,” “transformation,” “leading edge,” “disruptive,” and “paradigm shift”? Go on, go ahead and admit it. Ours do, too. That’s why we’re launching the future.ready() series—consisting of blogs, podcasts, webinars, and Twitter chats—with content created by developers, for developers. Nothing fluffy, nothing buzzy. With the future.ready() series, we aim to equip you with tools and knowledge that you can use—not just talk and tweet about.

For the first edition, I’ve invited Frank Ketelaars, an expert in high volume data space, to walk us through seven things to check off when starting a big data development project.

-Michalina Kiera, SoftLayer EMEA senior marketing manager

 

This year, big data moves from a water cooler discussion to the to-do list. Gartner estimates that more than 75 percent of companies are investing or planning to invest in big data in the next two years.

I have worked on multiple high volume projects in industries that include banking, telecommunications, manufacturing, life sciences, and government, and in roles including architect, big data developer, and streaming analytics specialist. Based on my experience, here’s a checklist I put together that should give developers a good start. Did I miss anything? Join me on the Twitter chat or webinar to share your experience, ask questions, and discuss further. (See details below.)     

1. Team up with a person who has a budget and a problem you can solve.

For a successful big data project, you need to solve a business problem that’s keeping somebody awake at night. If there isn’t a business problem and a business owner—ideally one with a budget—your project won’t get implemented. Experimentation is important when learning any new technology, but before you invest a lot of time in your big data platform, find your sponsor. To do so, you’ll need to talk to everyone, including IT, business users, and management. Remember that the technical advantages of analytics at scale might not immediately translate into business value.

2. Get your systems ready to collect the data.

With additional data sources, such as devices, vehicles, and sensors connected to networks and generating data, the variety of information and transportation mechanisms has grown dramatically, posing new challenges for the collection and interpretation of data.

Big data often comes from sources outside the business. External data comes at you in a variety of formats (including XML, JSON, and binary) and through a variety of different APIs. In 2016, you might think that everyone is on REST and JSON, but think again: SOAP still exists! The variety of the data is the primary technical driver behind big data investments, according to a survey of 402 business and IT professionals by management consultancy NewVantage Partners. From one day to the next, an API might change or a source might become unavailable.

Maybe one day we’ll see more standardization, but it won’t happen any time soon. For now, developers must plan to spend time checking for changes in APIs and data formats, and be ready to respond quickly to avoid service interruptions. And to expect the unexpected.

3. Make sure you have the right to use that data.

Governance is a business challenge, but it’s going to touch developers more than ever before—from the very start of the project. Much of the data they will be handling is unstructured, such as text records from a call center. That makes it hard to work out what’s confidential, what needs to be masked, and what can be shared freely with external developers. Data will need to be structured before it can be analyzed, but part of that process includes working out where the sensitive data is, and putting measures in place to ensure it is adequately protected throughout its lifecycle.

Developers need to work closely with the business to ensure that they can keep data safe, and provide end users with a guarantee that the right data is being analyzed and that its provenance can be trusted. Part of that process will be about finding somebody who will take ownership of the data and attest to its quality.

4. Pick the right tools and languages.

With no real standards in place yet, there are many different languages and tools used to collect, store, transport, and analyze big data. Languages include R, Python, Julia, Scala, and Go (plus the Java and C++ you might need to work with your existing systems). Technologies include Apache Pig, Hadoop, and Spark, which provides massively parallel processing on top of a file system without requiring Hadoop. There’s a list of 10 popular big data tools here, another 12 here, and a round-up of 45 big data tools here. 451 Research has created a map that classifies data platforms according to database type, implementation model, and technology. It’s a great resource, but its 18-color key shows how complex the landscape has become.

Not all of these tools and technologies will be right for you, but they hint at one way the developer’s core competency must change. Big data will require developers to be polyglots, conversant in perhaps five languages, who specialize in learning new tools and languages fast—not deep experts in one or two languages.

Nota bene: MapReduce and Pig are among the highest-paid technology skills in the US, and other big data skills are likely to be highly sought after as demand for them grows. Scala is a relatively new functional programming language for data preparation and analysis, and I predict it will be in high demand in the near future.

5. Forget “off-the-shelf.” Experiment and set up a big data solution that fits your needs. 

You can think of big data analytics tools like Hadoop as a car. You want to go to the showroom, pay, get in, and drive away. Instead, you’re given the wheels, doors, windows, chassis, engine, steering wheel, and a big bag of nuts and bolts. It’s your job to assemble it.

As InfoWorld notes, DevOps tools can help to create manageable Hadoop solutions. But you’re still faced with a lot of pieces to combine, diverse workloads, and scheduling challenges.

When experimenting with concepts and technologies to solve a certain business problem, also think about successful deployment in the organization. The project does not stop after the proof of concept.

6. Secure resources for changes and updates.

Apache Hadoop and Apache Spark are still evolving rapidly, and it is inevitable that the behavior of components will change over time; some may be deprecated shortly after initial release. Implementing new releases will be painful, and developers will need an overview of the big data infrastructure to ensure that as components change, their big data projects continue to perform as expected.

The developer team must plan time for updates and deprecated features, and a coordinated approach will be essential for keeping on top of the change.

7. Use infrastructure that’s ready for CPU and I/O intensive workloads.

My preferred definition of big data (and there are many – Forbes found 12) is this: "Big data is when you can no longer afford to bring the data to the processing, and you have to do the processing where the data is."

In traditional database and analytics applications, you get the data, load it onto your reporting server, process it, and post the results to the database.

With big data, you have terabytes of data, which might reside in different places—and which might not even be yours to move. Getting it to the processor is impractical. Big data technologies like Hadoop are based on the concept of data locality—doing the processing where the data resides.
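
A minimal PySpark sketch of that idea (the HDFS path here is hypothetical): the filter runs on whichever cluster nodes hold each block of the file, and only the small result comes back to the driver:

    from pyspark import SparkContext

    sc = SparkContext(appName='DataLocalityDemo')

    # The file stays distributed across HDFS; Spark ships the lambda to the
    # nodes holding each block instead of pulling terabytes to one machine.
    lines = sc.textFile('hdfs://namenode:8020/logs/2016/*.log')
    error_count = lines.filter(lambda line: 'ERROR' in line).count()

    print('Errors found: %d' % error_count)
    sc.stop()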

You can run Hadoop in a virtualized environment. Virtual servers don’t have local data, though, so the time taken to transport data between the SAN or other storage device and the server hurts the application’s performance. Noisy neighbors, unpredictable server speeds, and contested network connections can also have a significant impact on performance in a virtualized environment. As a result, it’s difficult to offer service level agreements (SLAs) to end users, which makes it hard for them to depend on your big data implementations.

The answer is to use bare metal servers on demand, which let you predict and guarantee the level of performance your application can achieve, so you can offer an SLA with confidence. Clusters can be set up quickly, so you can accelerate your project. Because performance is predictable and consistent, it’s possible to offer SLAs to business owners that will encourage them to invest in the big data project and rely on it for making business decisions.

How can I learn more?

Join me in the Twitter chat and webinar (details below) to discuss how you’re addressing big data or have your questions answered by me and my guests.  

Add our Twitter chat to your calendar. It happens Thursday, March 31 at 1 p.m. CET. Use the hashtag #SLdevchat to share your views or post your questions to me.

Register for the webinar on Wednesday, April 20, from 5 to 6 p.m. CET.

 

About the author

Frank Ketelaars has been Big Data Technical Leader in Europe for IBM since August 2013. As an architect, big data developer, and streaming analytics specialist, he has worked on multiple high volume projects in banking, telecommunications, manufacturing, life sciences and government. He is a specialist in Hadoop and real-time analytical processing.


 

March 23, 2016

Cloudocracy: Zumidian has seen the future—and it’s online gaming

Who makes the servers hum in SoftLayer data centers around the world?

The SLayers are the brains and muscle beneath the SoftLayer cloud—and you had a chance to meet some of us in last year’s Under the Infrastructure series. But each firewall has two sides! And those servers would not be humming if not for our brilliant customers.

Welcome to the Cloudocracy.

Whether you prefer to pass the bus journey with a puzzle game, or settle down for a tour of combat with your console, there’s a chance your gaming is managed by Zumidian. This week in our Cloudocracy series, we’d like to introduce you to CEO and President Nicolas Zumbiehl, who enjoys family time, cooking, and, of course, games!

Nicolas Zumbiehl, CEO of Zumidian

SOFTLAYER: Are you more Angry Birds or Call of Duty?

NICOLAS ZUMBIEHL: Call of Duty. I prefer to play strategy games, though—like World of Tanks—rather than first-person shooters. It seems strange, because I work in the gaming industry, but I don’t have a single game on my iPhone or iPad. I’m a PC and console player at heart. Until last year, I had both an Xbox and PS3. Now I just have a PS4 at home. Today, I most enjoy playing games with my kids.

SL: How did you become president of Zumidian?

ZUMBIEHL: I founded the company! Previously, I was working for Hypernia, a game hosting company in Florida. Hosting is becoming a commodity business and I wanted to provide more value to gaming companies. I found a niche doing what we call “game management.” Basically, we run the whole game environment for our customers—not only the infrastructure, but the game itself, the payment gateways, the database, everything that makes up the game. We ensure it’s available for players 24/7 around the world.

SL: What does being president involve, day-to-day?

ZUMBIEHL: Zumidian is still a small company, with fewer than 20 people, so I mostly handle sales, business development, and relationships with suppliers like SoftLayer. I travel all over the world, visiting gaming trade shows and meeting customers. I like to travel to the US, where we have most of our operations, and I also like Asia. Singapore and Korea are my two favorite places there. Singapore I like for the city and the environment; in Korea, the people are really friendly and I have lots of friends and customers there.

SL: What changes has online gaming brought about?

ZUMBIEHL: In the past, you’d buy a game for $70. With online gaming, the model is free to play but if you want to progress quickly, you need to buy items. The majority of players still play for free, but the ones that really want to succeed pay for it—sometimes big money. There is a Clash of Clans player in Korea who spends $30,000 per month on the game.

SL: What tips would you offer to startups looking to launch their first online game?

ZUMBIEHL: The cost of acquiring customers is increasing more and more. It’s becoming very hard to succeed. Most of the popular games are made by four or five companies. Personally, I’m not sure I would invest in a game now.

Try to offer your game on all available platforms from the very start. In some countries, people prefer to play on smartphones and tablets, and in others they favor consoles.

You need to add content all the time. If you have a simple PC or console game, people will play it through in 20 to 30 hours and get bored. They’ll say it’s cool, but it sucks because after 30 hours, that’s it. To be successful, you need to think about how you’ll generate interest often.

If you want to go global, you have to put your game servers as close to users as possible, while still maintaining your back office servers in one location so you don’t have to duplicate them around the world. One of the reasons we work with SoftLayer is that you can pretty much build a global infrastructure.

SL: What changes do you think online gaming will bring about in the future?

ZUMBIEHL: Virtual reality will become more and more common, enabling you to really immerse yourself in the game. Gaming is increasingly going to be online. It will be more of a rental or service model, where you can play a game from way more devices, on almost anything that has a screen.

Learn more about Zumidian here.

-Michalina
