Technology Posts

May 19, 2016

Bringing the power of GPUs to cloud

The GPU was invented by NVIDIA back in 1999 as a way to quickly render computer graphics by offloading the computational burden from the CPU. A great deal has happened since then—GPUs now enable leading-edge deep learning, scientific research, design, and “fast data” querying for startups that have ambitions of changing the world.

That’s because GPUs are very efficient at computer graphics, image processing, and other computationally intensive high performance computing (HPC) workloads. Their highly parallel structure makes them more effective than general-purpose CPUs for algorithms that process large blocks of data in parallel. Because GPUs can handle many calculations at the same time, they also hold a major performance advantage. This is why SoftLayer (now part of IBM Cloud) has brought these capabilities to a broader audience.

We support the NVIDIA Tesla Accelerated Computing Platform, which makes HPC capabilities more accessible to, and affordable for, everyone. Companies like Artomatix and MapD are using our NVIDIA GPU offerings to achieve unprecedented speed and performance, traditionally only achievable by building or renting an HPC lab.

By provisioning SoftLayer bare metal servers with cutting-edge NVIDIA GPU accelerators, any business can harness the processing power needed for HPC. This enables businesses to manage the most complex, compute-intensive workloads—from deep learning and big data analytics to video effects—using affordable, on-demand computing infrastructure.

Take a look at some of the groundbreaking results companies like MapD are experiencing using GPU-enabled technology running on IBM Cloud. They’re making big data exploration visually interactive and insightful by using NVIDIA Tesla K80 GPU accelerators running on SoftLayer bare metal servers.

SoftLayer has also added the NVIDIA Tesla M60 GPU to our arsenal. This GPU technology enables clients to deploy fewer, more powerful servers on our cloud while churning through more jobs. Specifically, simulation runs are cut down from weeks or days to hours when compared to a CPU-only server—think of the performance running tools and applications like Amber for molecular dynamics, Terachem for quantum chemistry, and Echelon for oil and gas.

The Tesla M60 also speeds up virtualized desktop applications. There is widespread support for running virtualized applications, from AutoCAD to Siemens NX, on a GPU server. This allows clients to centralize their infrastructure while providing access to the applications regardless of location. The use cases for GPUs are endless.

With this arsenal, we are one step closer to offering real supercomputing performance on a pay-as-you-go basis, which makes this new approach to tackling big data problems accessible to customers of all sizes. We are at an interesting inflection point in our industry, where GPU technology is opening the door for the next wave of breakthroughs across multiple industries.

-Jerry Gutierrez

April 26, 2016

Cloud. Ready-to-Wear.

It’s been five years since I started my journey with SoftLayer. And what a journey it has been—from being one of the first few folks in our Amsterdam office, to becoming part of the mega-family of IBMers; from one data center in Europe to six on this side of the pond and 40+ around the globe; from “Who is SoftLayer?” (or my favorite, “SoftPlayer”), to becoming a cloud environment fundamental for some of the biggest and boldest organizations worldwide.

But the most thrilling difference between 2016 and 2011 that I’ve been observing lately is the shift in the market’s perception of cloud, in which matters are important to adopters, and in the technology itself becoming mainstream.

Organizations of all sizes—small, medium, and large—while still raising valid questions around the level of control and security, are more often talking about the challenges of managing combined on-prem and shared environments, the readiness of their legacy applications to migrate to cloud, and their staff competency to orchestrate the new architecture.

At Cloud Expo 2016 (the fifth one for the SoftLayer EMEA team), next to two tremendous keynotes given by Sebastian Krause, General Manager IBM Cloud Europe, and by Rashik Parmar, Lead IBM Cloud Advisor/Europe IBM Distinguished Engineer, we held a roundtable to discuss the connection between hybrid cloud and agile business. Moderated by Rashik Parmar, the discussion confirmed the market’s evolution: from recognizing cloud as technology still proving its value, to technology critical in gaining a competitive advantage in today’s dynamic economy.

Rashik’s guests had deep technology backgrounds and came from organizations of all sizes and flavors—banking, supply chain management, ISV, publishing, manufacturing, MSP, insurance, and digital entertainment, to name a few. Most of them already have live cloud deployments, or have one ready to go into production this year.

When it came to the core factors underlying a move into the cloud, they unanimously listed gaining business agility and faster time-to-market. For a few minutes, there was a lively conversation among the panelists about cost and savings. They cited examples of poorly planned cloud implementations that were 20-30 percent more costly than keeping the legacy IT setup. Drawing on the example of a large Australian bank, Rashik urged companies to start the process of moving into cloud with a careful map of their own application landscape before thinking about remodeling the architecture to accommodate cloud.

The next questions the panelists tackled pertained to the drivers behind building hybrid cloud environments, which included:

  • Starting with some workloads and building a business case based on their success; from there, expanding the solution organization-wide
  • Increasing the speed of market entry for new solutions and products
  • Retiring certain legacy applications on-prem, while deploying new ones on cloud
  • Regulatory requirements that demand some workloads or data remain on-prem

When asked to define “hybrid cloud,” Rashik addressed the highly ambiguous term by simply stating that it refers to any combination of software-defined environment and automation with traditional IT.

The delegates discussed the types of cloud—local, dedicated, and shared—and found it difficult to define who controls hybrid cloud, and who is accountable for what component when something goes wrong. There was a general agreement that many organizations still put physical security over the digital one, which is not entirely applicable in the world of cloud.

Rashik explored, from his experience, where most cases of migrating into cloud usually originate. He referred to usage patterns and how organizations become agile with hybrid IT. The delegates agreed that gaining an option of immediate burstability and removing the headache of optimal resource management, from hardware to internal talent, are especially important.

Rashik then addressed the inhibitors of moving into cloud—and here’s the part that inspired me to write this post. While mentions of security (data security and job security) and the control over the environment arose, the focus repeatedly shifted toward the challenges of applications being incompatible with cloud architecture, complicated applications landscape, and scarcity of IT professionals skilled in managing complex (hybrid) cloud environments.

This is a visible trend that demonstrates the market has left the cloud department store’s changing room and is ready not only to make the purchase, but to “wear” the new technology, with a clear plan for where and when, and with an aim to achieve specific outcomes.

The conversation ended with energizing insights about API-driven innovation that enables developers to assemble a wide spectrum of functions, as opposed to being “just a coder.” Other topics included cognitive computing that bridges digital business with digital intelligence, and platforms such as blockchain that are gaining momentum.

To think that not so long ago, I had to explain to the average Cloud Expo delegate what “IaaS” stands for. We’ve come a long way.

 

-Michalina

April 5, 2016

When in doubt with firewalls, “How Do I?” it out

Spring is a great time to take stock and wipe off the cobwebs at home. Within the sales engineering department at SoftLayer, we thought it was a good idea to take a deeper look at our hardware firewall products and revamp our support documentation. Whether you’re using our shared hardware firewalls, a dedicated hardware firewall, or the FortiGate Security Appliance, we have lots of new information to share with you on KnowledgeLayer.

One aspect we’re highlighting is a series of articles entitled “How Do I?” within the Firewalls KnowledgeLayer node. A “How Do I?” provides a detailed explanation of how to use a SoftLayer service or tool via the customer portal or API.

For example, perhaps your cloud admin has just won the lottery, and has left the company. And now you need to reorient yourself with your company’s security posture in the cloud. Your first step might be to read “How Do I View My Firewalls?” which provides step-by-step instructions about how to view and manage your hardware firewalls at SoftLayer within the customer portal. If you discover you've been relying on iptables instead of an actual firewall to secure your applications, don't panic—ordering and securing your infrastructure with hardware firewalls can be done in minutes. Be sure to disable any accounts and API keys you no longer need within the Account tab. If you're new to SoftLayer and our portal, take a look at our on-demand webinars and training video series.
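If you'd rather check from the API side, the SoftLayer command-line client (slcli) can pull the same inventory. Here's a minimal sketch, assuming slcli is installed and configured with your username and API key; the exact command set and output columns can vary by client version:

slcli config setup      # one-time: store your SoftLayer username and API key
slcli firewall list     # list the shared and dedicated hardware firewalls on the account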

Now that you’ve identified the types of firewalls protecting your infrastructure, feel free to drill into our updated articles that can help you out. If you’re running a dedicated hardware firewall and want to know how to manage it within the portal, this “How Do I?” article is for you. We’ve also tailored “How Do I?” entries for shared hardware firewalls and the FortiGate Security Appliance to help you beat the heat in no time. The SoftLayer customer portal also lets you download firewall access logs as a CSV file. See for yourself how the Internet can truly be a hostile environment for a web-facing server. Every access attempt blocked by your firewall saves your server the work of processing software firewall rules and keeps your application safer.

We know that not all issues can be covered by how-to articles. To address that, we’ve also added a number of new entries to the Firewalls FAQ section. 

Keep the feedback coming! We’re here to help answer your sales-related technical questions. And be sure to check out our latest Sales Engineering Webinar: Creating a Digital Defense Plan with Firewalls. 

March 25, 2016

Be an Expert: Handle Drive Failures with Ease

Bare metal servers at SoftLayer employ best-in-class, industry-proven SAS, SATA, or SSD disks, which are extensively tested and qualified in-house by data center technicians. They are reliable, enterprise-grade hardware. However, the possibility of a single device failing cannot be ignored. HDD or device failures can happen for various reasons, such as power surges, mechanical or internal failure, drive firmware bugs, overheating, and aging. Though every effort is made to mitigate these issues by selecting best-in-class, pre-tested drives before making them available to customers, you could still run into drive failures occasionally.

Is having RAID protection just good enough?

Drive failures on dedicated bare metal servers may cause data loss, downtime, and service interruptions if the servers are not deployed with an adequate risk mitigation plan. As a first line of defense, users typically configure RAID at various levels. This may seem sufficient, but it can have the following problems:

  • The volume associated with the failed drive becomes degraded. This brings the virtual drive (VD) performance below an acceptable level. A degraded volume will most likely disable write-back caching, which further degrades write performance.
  • There is always a chance of another disk failing in the meantime. Unless a new disk is inserted and a rebuild is completed, a second disk failure could be catastrophic.    

Today, a manual response to a disk failure can take quite some time: the user must first notice or be notified that a disk has failed, and then a technician must be engaged to replace the disk in the server. During this time, while the system is in a degraded state, the threat of a second disk failure looms large.

To mitigate this risk, SoftLayer recommends that users always configure Global Hot Spare or Dedicated Hot Spare disks wherever available on their bare metal servers. Users can choose one or more Hot Spare disks per server. This typically requires earmarking a drive slot for hot spares, so when ordering bare metal servers, take into consideration leaving empty drive slots for global hot spare drives.

Adding a Hot Spare on an LSI MegaRAID Adaptor

Users can use the WebBIOS utility or MegaRAID Storage Manager to add a Hot Spare drive.

It is easiest to configure using the MegaRAID Storage Manager software, available on the AVAGO website.

Once logged in, you’ll want to choose the Logical tab to view the unused disks under “Unconfigured Drives.” Right-clicking and selecting “Assign Global Hot Spare” ensures this drive is on standby for any drive failure on any of the RAID volumes configured in the system. You can also choose to have a Dedicated Hot Spare for specific volumes that are critical. MegaRAID Storage Manager can also be used to access the server from a third-party machine or service laptop by providing the server IP address.

Figure 1 shows how to add a Global Hot Spare using MSM.

You can also use the WebBIOS interface to add Hot Spare drives. This is done by breaking into the card BIOS at an early stage of booting, using Ctrl+R to access the BIOS Configuration Utility. As a prerequisite, to access the KVM screen and see the boot-time messages, you’ll need to VPN into the SoftLayer network and use KVM under the “Actions” dropdown in the customer portal.

Once inside the WebBIOS screen, access the “PD Mgmt” tab and choose a free drive. Pressing F2 on the highlighted drive will display a menu for making the drive a Global Hot Spare. Figure 2 below provides more details on making a Hot Spare using the BIOS interface. We recommend using the virtual keyboard while navigating and issuing commands in the KVM viewer.

Figure 2 provides more details on making a Hot Spare using the BIOS interface.
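If you'd rather script this than click through the GUI or BIOS, LSI's MegaCLI command-line utility (not covered above) can typically perform the same assignment. A hedged sketch, where the enclosure:slot pair and adapter number are placeholders you'd replace with values from your own system:

MegaCli64 -PDList -aALL                        # identify the enclosure and slot of the free drive
MegaCli64 -PDHSP -Set -PhysDrv[252:5] -a0      # assign that drive as a Global Hot Spare on adapter 0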

Adding a Hot Spare Through an Adaptec Adaptor

Adaptec also provides the Adaptec Storage Manager and a BIOS option to add Global Hot Spares.

The Adaptec Storage Manager comes preinstalled on SoftLayer servers for the supported operating systems. It can also be downloaded for the specific Adaptec card from this link. After launching the Adaptec Storage Manager, select an available free drive and create a global hot spare.

Figure 3 shows how to create a global hot spare drive using the Adaptec Storage Manager.

Adaptec also provides a BIOS-based configuration utility that can be used to add a Hot Spare. To do this, you’ll need to break into the BIOS utility by pressing Ctrl+A early in the boot. After that, select Global Hot Spares from the main menu to enter the drive selection page. Select a drive by pressing Insert, then press Enter to submit the changes. Figure 4 below depicts the selection of a Global Hot Spare using the BIOS configuration utility.

Figure 4 depicts the selection of a Global Hot Spare using the BIOS configuration utility.

Using Hot Spares reduces the risk from further drive failures and lowers the time the system remains in a degraded state. We recommend SoftLayer customers leverage these benefits on their bare metal servers to be better armed against drive failures.

-Subramanian

March 24, 2016

future.ready(): 7 Things to Check Off Your Big Data Development List

Frank Ketelaars, Big Data Technical Leader for Europe at IBM, offers a checklist that every developer should have pinned to their board when starting a big data project.

Editor’s Note: Does your brain switch off when you hear industry-speak words like “innovation,” “transformation,” “leading edge,” “disruptive,” and “paradigm shift”? Go on, go ahead and admit it. Ours do, too. That’s why we’re launching the future.ready() series—consisting of blogs, podcasts, webinars, and Twitter chats—with content created by developers, for developers. Nothing fluffy, nothing buzzy. With the future.ready() series, we aim to equip you with tools and knowledge that you can use—not just talk and tweet about.

For the first edition, I’ve invited Frank Ketelaars, an expert in high volume data space, to walk us through seven things to check off when starting a big data development project.

-Michalina Kiera, SoftLayer EMEA senior marketing manager

 

This year, big data moves from a water cooler discussion to the to-do list. Gartner estimates that more than 75 percent of companies are investing or planning to invest in big data in the next two years.

I have worked on multiple high volume projects in industries that include banking, telecommunications, manufacturing, life sciences, and government, and in roles including architect, big data developer, and streaming analytics specialist. Based on my experience, here’s a checklist I put together that should give developers a good start. Did I miss anything? Join me on the Twitter chat or webinar to share your experience, ask questions, and discuss further. (See details below.)     

1. Team up with a person who has a budget and a problem you can solve.

For a successful big data project, you need to solve a business problem that’s keeping somebody awake at night. If there isn’t a business problem and a business owner—ideally one with a budget—your project won’t get implemented. Experimentation is important when learning any new technology. But before you invest a lot of time in your big data platform, find your sponsor. To do so, you’ll need to talk to everyone, including IT, business users, and management. Remember that the technical advantages of analytics at scale might not immediately translate into business value.

2. Get your systems ready to collect the data.

With additional data sources, such as devices, vehicles, and sensors connected to networks and generating data, the variety of information and transportation mechanisms has grown dramatically, posing new challenges for the collection and interpretation of data.

Big data often comes from sources outside the business. External data comes at you in a variety of formats (including XML, JSON, and binary) and via a variety of different APIs. In 2016, you might think that everyone is on REST and JSON, but think again: SOAP still exists! The variety of the data is the primary technical driver behind big data investments, according to a survey of 402 business and IT professionals by management consultancy NewVantage Partners. From one day to the next, an API might change or a source might become unavailable.

Maybe one day we’ll see more standardization, but it won’t happen any time soon. For now, developers must plan to spend time checking for changes in APIs and data formats, and be ready to respond quickly to avoid service interruptions. And to expect the unexpected.

3. Make sure you have the right to use that data.

Governance is a business challenge, but it’s going to touch developers more than ever before—from the very start of the project. Much of the data they will be handling is unstructured, such as text records from a call center. That makes it hard to work out what’s confidential, what needs to be masked, and what can be shared freely with external developers. Data will need to be structured before it can be analyzed, but part of that process includes working out where the sensitive data is, and putting measures in place to ensure it is adequately protected throughout its lifecycle.

Developers need to work closely with the business to ensure that they can keep data safe, and provide end users with a guarantee that the right data is being analyzed and that its provenance can be trusted. Part of that process will be about finding somebody who will take ownership of the data and attest to its quality.

4. Pick the right tools and languages.

With no real standards in place yet, there are many different languages and tools used to collect, store, transport, and analyze big data. Languages include R, Python, Julia, Scala, and Go (plus the Java and C++ you might need to work with your existing systems). Technologies include Apache Pig, Hadoop, and Spark, which provides massively parallel processing on top of a file system even without Hadoop. There’s a list of 10 popular big data tools here, another 12 here, and a round-up of 45 big data tools here. 451 Research has created a map that classifies data platforms according to database type, implementation model, and technology. It’s a great resource, but its 18-color key shows how complex the landscape has become.

Not all of these tools and technologies will be right for you, but they hint at one way the developer’s core competency must change. Big data will require developers to be polyglots, conversant in perhaps five languages, who specialize in learning new tools and languages fast—not deep experts in one or two languages.

Nota bene: MapReduce and Pig are among the highest-paid technology skills in the US, and other big data skills are likely to be highly sought after as demand for them grows. Scala is a relatively new functional programming language for data preparation and analysis, and I predict it will be in high demand in the near future.

5. Forget “off-the-shelf.” Experiment and set up a big data solution that fits your needs. 

You can think of big data analytics tools like Hadoop as a car. You want to go to the showroom, pay, get in, and drive away. Instead, you’re given the wheels, doors, windows, chassis, engine, steering wheel, and a big bag of nuts and bolts. It’s your job to assemble it.

As InfoWorld notes, DevOps tools can help to create manageable Hadoop solutions. But you’re still faced with a lot of pieces to combine, diverse workloads, and scheduling challenges.

When experimenting with concepts and technologies to solve a certain business problem, also think about successful deployment in the organization. The project does not stop after the proof of concept.

6. Secure resources for changes and updates.

Apache Hadoop and Apache Spark are still evolving rapidly and it is inevitable that the behavior of components will change over time and some may get deprecated shortly after initial release. Implementing new releases will be painful, and developers will need to have an overview of the big data infrastructure to ensure that as components change, their big data projects continue to perform as expected.

The developer team must plan time for updates and deprecated features, and a coordinated approach will be essential for keeping on top of the change.

7. Use infrastructure that’s ready for CPU and I/O intensive workloads.

My preferred definition of big data (and there are many – Forbes found 12) is this: "Big data is when you can no longer afford to bring the data to the processing, and you have to do the processing where the data is."

In traditional database and analytics applications, you get the data, load it onto your reporting server, process it, and post the results to the database.

With big data, you have terabytes of data, which might reside in different places—and which might not even be yours to move. Getting it to the processor is impractical. Big data technologies like Hadoop are based on the concept of data locality—doing the processing where the data resides.

You can run Hadoop in a virtualized environment. Virtual servers don’t have local data, though, so the time taken to transport data between the SAN or other storage device and the server hurts the application’s performance. Noisy neighbors, unpredictable server speeds and contested network connections can have a significant impact on performance in a virtualized environment. As a result, it’s difficult to offer service level agreements (SLAs) to end users, which makes it hard for them to depend on your big data implementations.

The answer is to use bare metal servers on demand, which enable you to predict and guarantee the level of performance your application can achieve, so you can offer an SLA with confidence. Clusters can be set up quickly, so you can accelerate your project really fast. Because performance is predictable and consistent, it’s possible to offer SLAs to business owners that will encourage them to invest in the big data project and rely on it for making business decisions.

How can I learn more?

Join me in the Twitter chat and webinar (details below) to discuss how you’re addressing big data or have your questions answered by me and my guests.  

Add our Twitter chat to your calendar. It happens Thursday, March 31 at 1 p.m. CET. Use the hashtag #SLdevchat to share your views or post your questions to me.

Register for the webinar on Wednesday, April 20, from 5 p.m. to 6 p.m. CET.

 

About the author

Frank Ketelaars has been Big Data Technical Leader in Europe for IBM since August 2013. As an architect, big data developer, and streaming analytics specialist, he has worked on multiple high volume projects in banking, telecommunications, manufacturing, life sciences and government. He is a specialist in Hadoop and real-time analytical processing.


 

March 4, 2016

Adventures with Bluemix

Keeping up with the rapid evolution of web programming is frighteningly difficult—especially when you have a day job. To ensure I don’t get left behind, I like to build a small project every year or so with a collection of the most buzzworthy technologies I can find. Nothing particularly impressive, of course, but just a collection of buttons that do things. This year I am trying to get a good grasp on “as a Service,” which seems to be everywhere these days. Hopefully this adventure will prove educational.

Why use services when I can do it myself?

The main idea behind “as a Service” is that somewhere out there in the cloud, someone has figured out how to do a particular task really well. This someone is willing to provide you access to that for a small service fee—thereby letting you, the developer, focus as much time as possible on your code and not so much time worrying about optimal configurations of things that you need to work efficiently.

SoftLayer is an Infrastructure as a Service (IaaS) provider, and it will be the home for my little application—in large part because I already have a ton of experience running servers myself.

I’m a big fan of Python, so I’m going to start programming with the Pyramid framework as the base for my new application. Like the “as a Service” offerings, programming frameworks and libraries exist to help the developer focus on their code and leverage the expertise of others for the auxiliary components.

To make everything pretty, I am going to use Bootstrap.js, which is apparently the de facto front-end library these days.

For everything else I want to use, there will be an attached Bluemix service. For the uninitiated, Bluemix is a pretty awesome collection of tools for developing and deploying code. At its core, Bluemix uses Cloud Foundry to provision cloud resources and deploy code. For now, I’m going to deploy my own code, but what I’m really interested in are the add-on services that I can just drop into my application and get going. The first service I want to try out is Cloudant NoSQL DB, which is a managed CouchDB instance with a few added features like a pretty neat dashboard.


Combining Bluemix services with SoftLayer servers

One of the great things about services in Bluemix is that they can be provisioned in a standalone deployment—meaning Bluemix services can be used by any computer with an Internet connection, and therefore by my SoftLayer servers, too. Since Bluemix services are generally deployed on SoftLayer hardware (there are some exceptions), the latency between SoftLayer servers and Bluemix services should be minimal, which is nice.

Creating a Cloudant service in Bluemix is as easy as hitting the Create button in the console. Creating a simple web application in Pyramid took a bit longer, but the quick tutorial helped me learn about all the cool things the Pyramid project can do. I also got to skip all the mess with SQLAlchemy, since I’m storing all the data in Cloudant. All that’s required is a sane ID system (I am using uuid) and some JSON. No need to get bogged down with a rigid table structure since Cloudant is a document store. If I want to change the data format, I just need to upload a new copy of the data, and a new revision of that document will be automatically created.
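To give a feel for how simple that is, here's a rough sketch of creating one such document over Cloudant's CouchDB-compatible HTTP API with curl; the account URL, database name, credentials, and fields are all placeholders for this example:

DOC_ID=$(uuidgen)
curl -X PUT "https://<account>.cloudant.com/myapp/$DOC_ID" \
     -u "<user>:<password>" \
     -H "Content-Type: application/json" \
     -d '{"title": "First post", "body": "Hello from Pyramid"}'

Updating the same document later (with its current _rev) is what produces the new revision mentioned above.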

After cobbling together a basic application that can publish and edit content, all I had to do to make everything look like it was designed intentionally was to add a few Bootstrap classes to my templates. And then I had a ready-to-use website!

Conclusion

Although making a web application is still as intensive as it’s always been, at least using technology in an “as a Service” fashion helps cut down on all the tertiary technologies you need to become an expert on to get anything to work. Even though the application I created here was pretty simple, I hope to expand it to include some of the more interesting Bluemix services to see what kind of Frankenstein application I can manage to produce. There are currently 100 Bluemix services, so I think the hardest part is going to be figuring out which one to use next.

-Chris

February 3, 2016

Use TShark to see what traffic is passing through your gateway

Many of SoftLayer’s solutions make excellent use of the Brocade vRouter (Vyatta) dedicated security appliance. It’s a true network gateway, router, and firewall for your servers in a SoftLayer data center. It’s also an invaluable troubleshooting tool should you have a connectivity issue or just want to take a gander at your network traffic. Built into vRouter’s command line, and available to you, is a full-fledged terminal-based Wireshark implementation—TShark.

TShark is fully implemented in vRouter. If you’re already familiar with using TShark, you know you can call it from the terminal in either configuration or operational mode. You accomplish this by prefacing the command with sudo, making the full command sudo tshark -<flags>.


For those of us less versed in the intricacies of Wireshark and its command line cousin, here are a couple of useful examples to help you out.

One common flag I use in nearly every capture is -i (and as a side note, for those coming from a Microsoft Windows background, the flags are case sensitive). -i specifies the interface on which to capture traffic and immediately helps to cut down on the amount of information unrelated to the problem at hand. If you don’t set this flag, the capture will default to “the first non-loopback interface,” or in the case of vRouter on SoftLayer, Bond0. Additionally, if you want to trace a packet and its reply, you can set -i any to watch or capture traffic through all the interfaces on the device.

The second flag that I nearly always use is -f, which defines a capture filter to match traffic against. Only traffic that matches this filter will be captured. The filter uses the standard Wireshark syntax. Again, if you’re familiar with Wireshark, you can go nuts; but here are a few of the common filters I frequently use to help you get started:

  • host 8.8.8.8 will match any traffic to or from the specified host. In this case, the venerable Google DNS servers. 
  • net 8.8.8.0/24 works just like host, but for the entire network specified, in case you don’t know the exact host address you are looking for.
  • dst and src are useful if you want to drill down to a specific flow or want to look at just the incoming or outgoing traffic. These filters are usually paired with a host or net to match against.
  • port lets you specify a port to capture traffic, like host and net. Used by itself, port will match both source and destination port. In the case of well-known services, you can also define the port by the common name, i.e., dns.  

One final cool trick with the -f filter is the operator and and the negation not. They let you combine search terms and specifically exclude traffic in order to create a very finely tuned capture for your needs.

If you want to capture to a file to share with a team or to plug into more advanced analysis tools on another system, the -w flag is your friend. Without -w, the capture behaves like tcpdump and the output appears in your terminal session. If you want to load the file into Wireshark or another packet analyzer tool, make sure to add the -F flag to specify the file format. Here is an example:

Vyatta# sudo tshark -i Bond0 -w testcap.pcap -F pcap -f 'src 10.128.3.5 and not port 80'

The command will capture on Bond0 and output the capture to a .pcap file called testcap.pcap in the root directory of the file system. It will match only traffic on Bond0 from 10.128.3.5 that is not source or destination port 80. While that is a bit of a mouthful to explain, it does capture a very well-defined stream!

Here is one more example:

Vyatta# sudo tshark -i any -f 'host 10.145.23.4 and not port ssh'

This command will capture traffic to the terminal that is to or from the specified IP (10.145.23.4) that is not SSH. I frequently use this filter, or one a lot like it, when I am SSHed into a host and want to get a more general idea of what it is doing on the network. I don’t care about ssh because I know the cause of that traffic (me!), but I want to know anything else that’s going to or from the host.

This is all very much the tip of the iceberg; you can find a lot more information on the TShark man page. Hopefully these tips help out next time you want to see just what traffic is passing through your gateway.

- Jeff 

 

January 22, 2016

Using Cyberduck to Access SoftLayer Object Storage

SoftLayer object storage provides a low cost option to store files in the cloud. There are three primary methods for managing files in SoftLayer object storage: via a web browser, using the object storage API, or using a third-party application. Here, we’ll focus on the third-party application method, demonstrating how to configure Cyberduck to perform file uploads and downloads. Cyberduck is a free and open source (GPL) software package that can be used to connect to FTP, SFTP, WebDAV, S3, or any OpenStack Swift-based object storage such as SoftLayer object storage.

Download and Install Cyberduck

You can download Cyberduck here, with clients for both Windows and Mac. After the installation is complete, download the profile for SoftLayer object storage here. Choose any of the download links under the Connecting section; preconfigured locations won’t matter as the settings will be modified later.

Once the profile has been downloaded, it needs to be modified to allow the hostname to be changed. Open the downloaded file (e.g. Softlayer (Amsterdam).cyberduckprofile) in a text editor. Locate the Hostname Configurable key (<key>Hostname Configurable</key>), and change the XML tag following that from <false/> to <true/>. Once this change has been made, there are two options to load the configuration file: Move the file to the profiles directory where Cyberduck is installed (on Windows this will be C:\Program Files (x86)\Cyberduck\profiles by default), or double-click on the profile, and Cyberduck will add the profile.
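After the edit, the relevant fragment of the profile should look like this (only the value element following the key changes):

<key>Hostname Configurable</key>
<true/>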

Configure Cyberduck to Work with SoftLayer

Now that Cyberduck has been installed, it needs to be configured to connect to object storage in SoftLayer. You can do this by creating a bookmark in Cyberduck. With Cyberduck open, click on Bookmark in the main menu bar, then New Bookmark in the dropdown menu.


In the dropdown box at the top of the Bookmark window, select SoftLayer Object Storage (Name of Location). Depending on the profile that was downloaded, the location may be different. When the SoftLayer profile has been selected, the configurable options for that profile will be displayed. Enter a nickname that will identify the object storage location.

Next, depending on which data center will store the objects, the server option in Cyberduck may need to be changed. To find out which server should be specified, open a web browser and log into the SoftLayer portal. Once in the portal click on Storage then Object Storage. Select the object storage account that will be used for this connection.

If no accounts exist, a new object storage account can be ordered by using the Order Object Storage link located in the upper right-hand corner. After selecting the account, select the data center where the object storage will reside.

When the Object Storage page loads, there will be a View Credentials link under the object storage container dropdown box in the upper left section of the screen.

Clicking on that link will bring up a dialog box that contains the information necessary for creating a connection in Cyberduck. Because SoftLayer has both public and private networks, there are two authentication endpoints available. The setup for each endpoint is the same, but a VPN connection to the SoftLayer private network is necessary in order to use the private endpoint.

Here, we will be using the public endpoints. Select the server address for the public endpoint and enter it into the Server text box in Cyberduck.

Next, select the username. It will be in the format:

object_storage_account_name:softlayer_user_name.

Then enter it into the Username text box. (Make note of the API Key, it will be used later.)

Once those options have been set (Nickname, Server, and Username), close the new bookmark window. In the main Cyberduck window, you should see the newly created bookmark listed. Double-click on it to connect to the SoftLayer object storage.

At this point, Cyberduck will prompt for the API key. Use the API key noted above and Cyberduck will connect to SoftLayer object storage. Uploading files can be accomplished by selecting the files and dragging them to the Cyberduck window. Downloading can be accomplished by selecting a file in Cyberduck and dragging it to the local folder where it will be downloaded.

-Bryan Bush

January 8, 2016

A guide to Direct Link connectivity

So you’ve got your infrastructure running on SoftLayer, but you find yourself wishing for a more direct way to connect your on-premises or co-located infrastructure to your SoftLayer cloud infrastructure—with higher bandwidth and lower latency. And you also think the Internet just isn’t good enough when we’re talking VPN tunnels and private networking connectivity. Does that sound like you?

What are my options?

SoftLayer offers three Direct Link products specifically for customers looking for the most efficient connection to their SoftLayer private network. A Direct Link enables you to connect to the SoftLayer private network backbone with low latency and speeds up to 10Gbps, using fiber cross-connect patches directly into the SoftLayer private network. A Direct Link connects you to the SoftLayer private network within the same geographical location as the physical cross-connect. (An add-on is available that enables you to connect to any of your SoftLayer private networks on a global scale.)

Direct Link Network Service Provider


The Direct Link NSP option allows you to create a cross-connect using single-mode fiber from one of our PoP locations onto the SoftLayer private backbone. A Network Service Provider of your own choosing provides the connectivity from your on-prem location to the SoftLayer PoP. This could be an “in-facility” cross-connect to your own equipment, or an MPLS, Metro WAN, or fiber provider. The Direct Link NSP is the top-tier option we offer for private network connectivity onto the SoftLayer private backbone.

Direct Link Cloud Exchange Provider


A cloud exchange provider is a carrier/network provider that is already connected to SoftLayer using multi-tenant, high capacity links. This allows you to purchase a virtual circuit at this provider and a Direct Link cloud exchange link at SoftLayer at reduced costs, because the physical connectivity from SoftLayer to the cloud exchange provider is already in place and shared amongst other customers.

Direct Link Colocation Provider


If your gear is co-located in a cabinet purchased via SoftLayer, in a facility near or adjacent to a SoftLayer data center or POD, this option would work for you. Similar to the NSP option, this uses single-mode fiber, but there’s no need to connect to a SoftLayer PoP location first—you can connect directly from your cabinet to the relevant SoftLayer data center.

How do you communicate over a Direct Link?

The SoftLayer Direct Link service is a routed Layer 3 service. Routing options are: routing using a SoftLayer-assigned subnet, NAT, GRE or IPsec tunnels, VRF, and BGP.

Routing
We directly bind the 172.x.x.x IP block to your remote hosts that need to communicate with your SoftLayer infrastructure. You can either renumber your existing hosts on the remote networks, or bind these as secondary IPs and set up appropriate static routes on the host, as sketched below. You can then use the 172.x.x.x IP space to communicate with the 10.x.x.x IPs of your SoftLayer hosts as necessary. Routing via BGP is optional.
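On a Linux host, that secondary-IP-plus-static-route setup might look roughly like the following; the interface name, the 172.16.x.x address, and the gateway are placeholders that depend on the subnet SoftLayer assigns you:

ip addr add 172.16.10.5/24 dev eth1                  # bind the assigned 172.x address as a secondary IP
ip route add 10.0.0.0/8 via 172.16.10.1 dev eth1     # reach the SoftLayer 10.x.x.x private network over the link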

NAT
With NAT, SoftLayer will assign you a block of IPs from the 172.16.0.0/12 IP block to NAT into a device from your remote network to prevent IP conflicts with the SoftLayer 10.x.x.x IP range(s) assigned.

GRE / IPsec Tunneling
You can create a GRE or IPsec tunnel between the remote network and your infrastructure here at SoftLayer. This allows you to use whatever IP space you want on the SoftLayer side and route back across the tunnel to the remote network. That being said, this is a configuration that will have to be managed and supported by you, independent of SoftLayer. Furthermore, this configuration could break connectivity to the SoftLayer services network if you use a 10.x.x.x block that SoftLayer has in use for services. This solution will also require that each host needing connectivity to both the SoftLayer services network and the remote network have two IPs assigned (one from the SL 10.x.x.x block, and one from the remote network block) and static routes set up on the host to ensure traffic is routed appropriately. You will not be able to assign whatever IP space you want directly on the SoftLayer hosts (BYOIP) and have it inherently routable on the SoftLayer network. The only way to do this is as outlined above, and it is not supported by SoftLayer.
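As a rough idea of what that customer-managed configuration involves, here is a minimal GRE sketch for a Linux host; every address and interface name is a placeholder, and an equivalent configuration would be needed on the far end of the tunnel:

ip tunnel add gre1 mode gre local 172.16.10.5 remote 198.51.100.10 ttl 255
ip addr add 192.168.100.1/30 dev gre1
ip link set gre1 up
ip route add 192.168.200.0/24 dev gre1     # remote network reachable through the tunnel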

VRF
You can opt in to utilizing a VRF (Virtual Routing and Forwarding) instance. This allows you to either utilize your own remote IP addresses or overlap with a large majority of the SoftLayer infrastructure; however, be aware that if you utilize the 10.x.x.x network, you still cannot overlap with your hosts within SoftLayer or within the SoftLayer services network (10.0.0.0/14 and 10.200.0.0/14). You will not be able to use any of the following for your remote prefixes: 10.0.0.0/14, 10.200.0.0/14, 10.198.0.0/15, 169.254.0.0/16, 224.0.0.0/4, and any IP ranges assigned to your VLANs on the SoftLayer platform. When choosing the VRF option, using the SoftLayer VPN services for management of your servers will no longer be possible. Routing via BGP is optional.


FAQ

Will I need to provide my own cross-connect?
Yes, you will need to order your own cross-connect at your data center of choice—to be connected to the SoftLayer switch port described in the LOA (Letter of Authorization) provided.

What kind of cross-connects are supported?
We strictly use Single Mode Fiber (SMF). We do not accept MMF or Copper.

What is the default size of the remote 172.16.*.* subnet assigned?
Unless otherwise requested, Direct Link customers will be assigned a /24 (256 IPs) subnet.

Which IP block has been reserved for SoftLayer servers on the backend?
We've allocated the entire 10.0.0.0/8 block for use on the SoftLayer private network. Specifically, 10.0.0.0/14 has been earmarked for services. Here’s the full list of service subnets: http://knowledgelayer.softlayer.com/faqs/196#154

Which IP block has been reserved for point-to-point SoftLayer XCR to customer router?
The 10.254.0.0/16 range. We normally allocate either a /30 or /31 subnet for the point-to-point connection (between our XCR and your equipment on the other end of the Direct Link).

Does Direct Link support jumbo frames?
Yes. Just like the private SoftLayer network, Direct Link can support jumbo frames with an MTU (Maximum Transmission Unit) of up to 9000.

Pricing and locations

A list of available locations and pricing can be found at www.softlayer.com/direct-link.

-Mathijs Dubbe

January 6, 2016

Do You Speak SoftLayer Object Storage?

So you’ve made the decision to utilize object storage at SoftLayer. Great! But are you and your applications fluent in object storage? Do you know how to transfer data to SoftLayer object storage as well as modify and delete objects? How about when to use APIs and when to use storage gateways? If not, you’re not alone.

We’ve found that most IT professionals understand the difference between “traditional” (i.e., file and block) storage and object storage. They have difficulty, however, navigating the methods for interacting with SoftLayer’s object storage service, which is based on OpenStack Swift. This is understandable, because traditional storage systems expose volumes and/or shares that can be mounted and consumed via the iSCSI, NFS, or SMB protocols.

That’s not the case with object storage, including the object storage service offered by SoftLayer. Data is only accessed via the use of REST APIs and language bindings, third-party applications supporting SFTP, the SoftLayer customer portal, or via storage gateways.

The solutions are outlined below, including guidance on when to utilize each access method. Figure 1 provides a high level overview of the available options and their purpose.



Figure 1: Object storage data access methods

REST APIs and Language Bindings
The first and possibly most flexible method to access SoftLayer object storage is via REST APIs and language bindings. These APIs and bindings give you the ability to interact with SoftLayer object storage from the command line or programmatically. As a result, you can create scripts to upload files, download certain objects, and modify metadata related to an object. Additionally, the current support for PHP, Java, Ruby, and Python bindings gives application developers the flexibility to support SoftLayer object storage in their applications.
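To give a feel for the REST flow, here's a hedged curl sketch of the Swift-style authentication and a single object upload. The data center endpoint, account name, user, API key, and container are placeholders; your actual endpoint and credentials are shown on the View Credentials page in the portal:

curl -i https://dal05.objectstorage.softlayer.net/auth/v1.0 \
     -H "X-Auth-User: SLOS1234-5:myuser" -H "X-Auth-Key: <api_key>"
# The response headers include X-Storage-Url and X-Auth-Token; use them for subsequent calls:
curl -X PUT -T report.pdf -H "X-Auth-Token: <token>" "<storage_url>/mycontainer/report.pdf"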

While this method is flexible in terms of capabilities, it does assume the user has knowledge and experience writing scripts, programs, and applications. REST APIs and language bindings aren’t the best methods for IT organizations that want to integrate existing environment backup, archive, and disaster recovery solutions. These solutions typically require traditional storage mount points, which REST APIs and language bindings don’t provide.

Third-Party Applications
The second method is to use third-party applications that support SFTP. This method abstracts away the REST APIs and gives users the ability to upload, download, and delete objects via a GUI. However, you won’t have the ability to modify metadata when using an SFTP client. Additionally, third-party applications have a 5GB upload limit placed on each object by SoftLayer and OpenStack Swift. If an object greater than 5GB needs to be uploaded, you have to follow the OpenStack method of creating large objects on object storage to ensure a successful and efficient upload. Unless you’re comfortable with this methodology, it’s strongly recommended that you use either the REST APIs or storage gateway solutions for files over 5GB.
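For reference, the OpenStack large-object approach boils down to uploading the pieces and then a manifest that ties them together. A hedged sketch using dynamic large objects, with the container names, token, and segment size as placeholders:

split -b 1G big-backup.tar big-backup.tar.part-
for seg in big-backup.tar.part-*; do
  curl -X PUT -T "$seg" -H "X-Auth-Token: <token>" "<storage_url>/mycontainer_segments/big-backup.tar/$seg"
done
curl -X PUT -d '' -H "X-Auth-Token: <token>" \
     -H "X-Object-Manifest: mycontainer_segments/big-backup.tar/" \
     "<storage_url>/mycontainer/big-backup.tar"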

SoftLayer Customer Portal
The third method to access SoftLayer object storage is to simply use the SoftLayer customer portal. By using the portal, you have the ability to add containers, add files to containers, delete files from containers, modify metadata, and enable CDN capabilities. As with the SFTP method of accessing the object store, you can upload an unlimited number of files, as long as each file does not exceed 20MB in size. Also, there is no bulk upload option within the customer portal; users must select and upload files on a per-file basis. While using the portal is simple, it does have some limitations and is best for users who only want to upload a few files of 20MB or less.

Storage Gateways
The last method to access and utilize SoftLayer object storage is storage gateways. Storage gateways are unique among these methods: they expose traditional storage protocols like iSCSI, NFS, CIFS, and SMB and translate read/write/modify commands into REST API calls against the object storage service. As a result, these devices offer an easier path to consuming SoftLayer object storage for businesses looking to integrate their on-premises environment with the cloud. Some storage gateways also have the ability to compress, deduplicate, and encrypt data in-flight and at-rest. Storage gateways work best for organizations looking to integrate existing applications that require traditional storage access methods (like backup software) with object storage, or to securely transfer and store data in cloud object storage.

Summary
While there are many methods to access SoftLayer object storage, it’s important that you select an option that best meets your requirements relating to data access, security, and integration. For example, if you’re writing an application that requires object storage, you would most likely choose to interact with object storage via REST APIs or use language bindings. Or, if you simply need to integrate existing applications in your environment to cloud object storage, storage gateway would be the best option. In all cases, make sure you can meet your requirements with the appropriate method.

Table 1 lists sample requirements and shows whether each option meets the requirements. Use it to help you with your decision making process:



Table 1: Decision making tool

Click here for more information about SoftLayer’s object storage service and click here for FAQs on object storage.

Click here for information about SoftLayer’s REST-APIs and language bindings.

-Daniel De Araujo & Naeem Altaf
