Posts Tagged 'API'

July 10, 2015

GPU Accelerated Supercomputing Comes to IBM’s SoftLayer Cloud Service

NVIDIA GPU technology powers some of the world’s fastest supercomputers. In fact, GPU technology is at the heart of the current #1 U.S. system, Titan, located at Oak Ridge National Labs. It will also be an important part of Titan’s forthcoming successor, Summit, an advanced new supercomputer based on next-generation, ultra-high performance GPU-accelerated OpenPOWER servers.

But, not everyone has access to these monster machines for their high-performance computing, deep learning, and scientific computing work. That’s why NVIDIA is working with IBM to make supercomputing-class technology more accessible to researchers, engineers, developers, and other HPC users.

IBM Cloud announced earlier this week that NVIDIA Tesla K80 dual-GPU accelerators are now available on SoftLayer bare metal cloud servers. The team worked closely together to test and tune the speedy delivery of NVIDIA Tesla K80 enabled servers. The Tesla K80 GPU accelerators are the flagship technology of our Tesla Accelerated Computing Platform, delivering 10 times higher performance than today’s fastest CPU for a range of deep learning, data analytics and HPC applications.

Bringing Tesla K80 GPUs to SoftLayer means that more researchers and engineers worldwide will have access to ultra-high-performance computing resources – without having to deal with the cost and time commitment of purchasing and maintaining their own HPC clusters. On-demand high performance computing can now be delivered in a matter of hours instead of the weeks or months it takes to build and deploy a dedicated system. Never before has bare-metal compute infrastructure been so agile. Fully populated Tesla K80 GPU nodes can be provisioned and used in two to four hours. Then, they can be de-provisioned or reassigned just as quickly.

With support for GPU accelerators, SoftLayer is providing full-scale data center resources for users to build a compute cluster, burst an existing cluster, or launch a compute intensive project—all on easy to use, cost effective, and easily accessible cloud infrastructure.

The strength of SoftLayer’s API and the experience of IBM Cloud make it easy for users to provision and reclaim resources, enabling true cloud bursting for compute clusters. And controlling resources is key to controlling costs.

We’re delighted to expand the reach of GPU-accelerated computing broader than ever before. For more info on IBM Cloud’s GPU offerings on SoftLayer or to sign up, visit


Michael O’Neill is an established leader for NVIDIA. He provides specialized strategic thought leadership and technical guidance to customers on NVIDIA GRID and Tesla GPUs in virtualized environments. He works closely with business leaders to develop innovative solutions for graphical and compute heavy workloads. With over twenty years of experience in planning, developing, and implementing state of the art information systems, he has built a significant body of work empowering people to live, work and collaborate from anywhere on any device. His guidance has provided Fortune 500 companies with cloud computing solutions to help IT and service providers build private, hybrid and public clouds to deliver high-performance, elastic and cost-effective services for mobile workstyles.

April 20, 2015

The SLayer Standard Vol. 1, No. 10

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

The Battle for Global Market Share
Warmer weather must be around the corner—or it could just be the cloud industry heating up. How will cloud providers profit as more and more providers push for world domination? The Economist predicts an industry change as prices drop.

IBM Partners with TI on Secure APIs for IoT
Allow me to translate: the International Business Machines Corporation is partnering with Texas Instruments to secure application program interfaces for the Internet of Things. Through its collaboration with TI, IBM will create a Secure Registry Service that will provide trust and authentication practices and protocols across the value chain—from silicon embedded in devices and products to businesses and homes.

(Join the conversation at #IoTNow or #IoT.)

The U.S. Army Goes Hybrid
The U.S. Army is hoping to see a 50 percent cost savings by utilizing IBM cloud services and products. Like many customers, the Army opted for a hybrid solution for security, flexibility, and ease of scale. Read more about what IBM Cloud and SoftLayer are doing for the U.S. Army and other U.S. government departments.

The Only Constant is Change
Or so said Heraclitus of Ephesus. And to keep up with the changing times, IBM has reinvented itself over and over again to stay relevant and successful. This interesting read discusses why big corporations just aren't what they used to be, what major factors have transformed the IT industry over the last couple of decades, and how IBM has been leading the change, time after time.


October 28, 2014

SoftLayer and AWS: What's the Difference?

People often compare SoftLayer with Amazon Web Services (AWS).

It’s easy to understand why. We’ve both built scalable infrastructure platforms to provide cloud resources to the same broad range of customers—from individual entrepreneurs to the world’s largest enterprises.

But while the desire to compare is understandable, the comparison itself isn’t quite apt. The SoftLayer platform is fundamentally different from AWS.

In fact, AWS could be run on SoftLayer. SoftLayer couldn’t be run on AWS.

AWS provisions in the public cloud.

When AWS started letting customers have virtual machines deployed on the infrastructure that AWS had built for their e-commerce business, AWS accelerated the adoption of virtual server hosting within the existing world of Web hosting.

In an AWS cloud environment, customers order the computing and storage resources they need, and AWS deploys those resources on demand. The mechanics of that deployment are important to note, though.

AWS has data centers full of physical servers that are integrated with each other in a massive public cloud environment. These servers are managed and maintained by AWS, and they collectively make up the available cloud infrastructure in the facility.

AWS installs a virtualization layer (also known as a hypervisor) on these physical servers to tie the individual nodes into the environment’s total capacity. When a customer orders a cloud server from AWS, this virtualization layer finds a node with the requested resources available and provisions a server image with the customer’s desired operating system, applications, etc. The entire process is quick and automated, and each customer has complete control over the resources he or she ordered.

That virtualization layer serves a purpose, and while it may seem insignificant, it highlights a critical difference between AWS's platform and ours:

AWS automates and provisions at the hypervisor level, while SoftLayer automates and provisions at the data center level.

SoftLayer provisions down to bare metal resources.

While many have their sights on beating AWS at its own game, SoftLayer plays a different game.

The SoftLayer platform is designed to give customers complete access to, and control over, the actual infrastructure they need to build a solution in the cloud. Ordering, deployment, and management of the server, storage, and security hardware resources themselves are automated and remote, and the hardware is hosted in our data centers, so customers don’t have to build their own facilities or purchase their own hardware to get the reliable, high-performance computing they need.

Everything in SoftLayer data centers is transparent, automated, integrated, and built on an open API that customers can access directly. Every server is connected to three distinct physical networks so that public, private, and management network traffic are segmented. And our expert technical support is available for all customers, 24x7.
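To give a flavor of how directly accessible that open API is, here is a minimal sketch of how a REST call to it is shaped. This is an illustration rather than official client code, and the username and API key are placeholders:

```python
import base64

API_BASE = 'https://api.softlayer.com/rest/v3'

def build_request(service, method, username, api_key):
    """Build the URL and Basic-Auth header for a SoftLayer REST API call.

    SoftLayer's REST endpoints follow the pattern /<service>/<method>.json
    and authenticate with the account username and API key via HTTP Basic auth.
    """
    url = '%s/%s/%s.json' % (API_BASE, service, method)
    token = base64.b64encode(('%s:%s' % (username, api_key)).encode()).decode()
    headers = {'Authorization': 'Basic ' + token}
    return url, headers

# Example: the request that would fetch the account record
# (nothing is actually sent here).
url, headers = build_request('SoftLayer_Account', 'getObject',
                             'demo-user', 'demo-key')
```

Issuing the request is then a single call with any HTTP client; the same URL pattern covers every service in the API.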

Notice that the automation and integration of our platform happens at the data center level. We don’t need a virtualization layer to deploy our cloud resources. As a result, we can deploy bare metal servers in the same way AWS deploys public cloud servers (though, admittedly, bare metal servers take more time to deploy than virtual servers in the public cloud). By provisioning down to a lower level in the infrastructure stack, we’re able to offer customers more choice and control in their cloud environments:

In addition to the control customers have over infrastructure resources, with our unique network architecture, their servers aren’t isolated inside the four walls of a single data center. Customers can order one server in Dallas and another in Hong Kong, and those two servers can communicate with each other directly and freely across our private network without interfering with customers’ public network traffic. So with every new data center we build, we geographically expand a unified cloud footprint. No regions. No software-defined virtual networks. No isolation.

SoftLayer vs. AWS

Parts of our cloud business certainly compete with AWS. When users compare virtual servers between us, they encounter a number of similarities. But this post isn’t about comparing and contrasting offerings in the areas in which we’re similar … it’s about explaining how we’re different:
  • SoftLayer is able to provision bare metal resources to customers. This gives customers free rein over the raw compute power of a specific server configuration. It saves the customer from the 2–3 percent performance hit from the hypervisor, and it prevents “noisy neighbors” from being provisioned alongside a customer’s virtual server. AWS does not provision bare metal resources.

  • AWS differentiates “availability zones” and “regions” for customers who want to expand their cloud infrastructure into multiple locations. SoftLayer has data centers interconnected on a global private network. Customers can select the specific SoftLayer data center location they want so they can provision servers in the exact location they desire.

  • When AWS customers move data between their AWS servers, they see “Inter-Region Data Transfer Out” and “Intra-Region Data Transfer” on their bills. If you’re moving data from one SoftLayer facility to another SoftLayer facility (anywhere in the world), that transfer is free and unmetered. And it doesn’t fight your public traffic for bandwidth.

  • With AWS, customers pay a per-GB charge for bandwidth on every bill. At SoftLayer, all of our products and services include free inbound and outbound bandwidth across our global private network and our out-of-band management network. All customers get 250GB/month on virtual and 500GB/month on bare metal for public outbound bandwidth. And customers can opt for additional public outbound bandwidth with packages on monthly cloud servers including up to 20TB, bringing bandwidth costs down to less than $0.075/GB.*

  • SoftLayer offers a broad range of management, monitoring, and support options to customers at no additional cost. AWS charges for monitoring based on metrics, frequency, and number of alarms per resource. And having access to support requires an additional monthly cost.

Do SoftLayer and AWS both offer Infrastructure as a Service? Yes.

Does that make SoftLayer and AWS the same? No.

*This paragraph was revised on July 28, 2015 to reflect updated pricing. For more information, see the SoftLayer Pricing page.


June 30, 2014

OpenNebula 4.8: SoftLayer Integration

In the next month, the team of talented developers at C12G Labs will be rolling out OpenNebula 4.8, and in that release, they will be adding integration with SoftLayer! If you aren't familiar with OpenNebula, it's a full-featured open-source platform designed to bring simplicity to managing private and hybrid cloud environments. Using a combination of existing virtualization technologies with advanced features for multi-tenancy, automatic provisioning, and elasticity, OpenNebula is driven to meet the real needs of sysadmins and devops.

In OpenNebula 4.8, users can quickly and seamlessly provision and manage SoftLayer cloud infrastructure through OpenNebula's simple, flexible interface. From a single pane of glass, you can create virtual data center environments, configure and adjust cloud resources, and automate the execution and scaling of multi-tiered applications. If you don't want to leave the command line, you can access the same functionality from a powerful CLI tool or through the OpenNebula API.
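As a rough illustration of that "single pane of glass" idea, a hybrid VM template in OpenNebula might look something like the fragment below. The attribute names are assumptions modeled on OpenNebula's existing hybrid-driver template convention (as used by its EC2 driver); consult the 4.8 release documentation for the actual SoftLayer attributes:

```
# Hypothetical OpenNebula VM template targeting the SoftLayer driver.
# Attribute names follow OpenNebula's hybrid-cloud template style and
# are illustrative only.
NAME   = "sl-demo"
CPU    = 1
MEMORY = 2048

PUBLIC_CLOUD = [
  TYPE     = "SOFTLAYER",
  HOSTNAME = "demo-vm",
  DOMAIN   = "example.com"
]
```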

When the C12G Labs team approached us with the opportunity to be featured in the next release of their platform, several folks from the office were happy to contribute their time to make the integration as seamless as possible. Some of our largest customers have already begun using OpenNebula to manage their hybrid cloud environments, so official support for the SoftLayer cloud in OpenNebula is a huge benefit to them (and to us). The result of this collaboration will be released under the Apache license, and as such, it will be freely available to the public.

To give you an idea of how easy OpenNebula is to use, they created an animated GIF to show the process of creating and powering down virtual machines, creating a server image, and managing account settings:


We'd like to give a big shout-out to the C12G Labs team for all of the great work they've done on the newest version of OpenNebula, and we look forward to seeing how the platform continues to grow and improve in the future.


June 9, 2014

Visualizing a SoftLayer Billing Order

In my time spent as a data and object modeler, I’ve dealt with both good and bad examples of model visualization. As an IBMer through the Rational acquisition, I have been using modeling tools for a long time. I can appreciate a nice diagram shining a ray of light on an object structure, and abhor a behemoth spaghetti diagram.

When I started studying SoftLayer’s API documentation, I saw both the relational and hierarchical nature of SoftLayer’s concept model. The naming convention of API services and data types embodies their hierarchical structure. While reading about “relational properties” in data types, I thought it would be helpful to see diagrams showing relationships between services and data types versus clicking through reference pages. After all, diagramming data models is a valuable complement to verbal descriptions.

One way people can deal with complex data models is to digest them a little at a time. I can’t imagine a complete data model diagram of SoftLayer’s cloud offering, but I can try to visualize small portions of it. In this spirit, after reviewing article and blog entries on creating product orders using SoftLayer’s API, I drew an E-R diagram, using IBM Rational Software Architect, of basic order elements.

The diagram, Figure 1, should help people understand data entities involved in creating SoftLayer product orders and the relationships among the entities. In particular, IBM Business Partners implementing custom re-branded portals to support the ordering of SoftLayer resources will benefit from visualization of the data model. Picture this!

Figure 1. Diagram of the SoftLayer Billing Order

A user account can have many associated billing orders, which are composed of billing order items. Billing order items can contain multiple order containers that hold a product package. Each package can have several configurations including product item categories. They can be composed of product items with each item having several possible prices.
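The hierarchy described above can be sketched as a handful of plain classes. These are illustrative stand-ins for the real SoftLayer data types (SoftLayer_Billing_Order and friends), which carry far more properties than shown here:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Price:
    amount: float

@dataclass
class Item:
    description: str
    prices: List[Price] = field(default_factory=list)   # each item has several possible prices

@dataclass
class Category:
    name: str
    items: List[Item] = field(default_factory=list)

@dataclass
class Package:
    name: str
    categories: List[Category] = field(default_factory=list)  # configurations include item categories

@dataclass
class OrderContainer:
    package: Package                                    # each container holds a product package

@dataclass
class BillingOrderItem:
    containers: List[OrderContainer] = field(default_factory=list)

@dataclass
class BillingOrder:
    items: List[BillingOrderItem] = field(default_factory=list)

@dataclass
class Account:
    orders: List[BillingOrder] = field(default_factory=list)  # an account can have many orders

# Navigating the relationships mirrors walking the E-R diagram:
account = Account(orders=[BillingOrder(items=[BillingOrderItem(
    containers=[OrderContainer(package=Package(name='Bare Metal Server'))])])])
```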


Andrew Hoppe, Ph.D., is a Worldwide Channel Solutions Architect for SoftLayer, an IBM Company.

February 6, 2014

Building a Bridge to the OpenStack API

OpenStack is experiencing explosive growth in the cloud market. With more than 200 companies contributing code to the source and new installations coming online every day, OpenStack is pushing hard to become a global standard for cloud computing. Dozens of useful tools and software products have been developed using the OpenStack API, so a growing community of administrators, developers and IT organizations have access to easy-to-use, powerful cloud resources. This kind of OpenStack integration is great for users on a full OpenStack cloud, but it introduces a challenge to providers and users on other cloud platforms: Should we consider deploying or moving to an OpenStack environment to take advantage of these tools?

If a cloud provider spends years developing a unique platform with a proprietary API, implementing native support for the OpenStack API or deploying a full OpenStack solution may be cost prohibitive, even with significant customer and market demand. The provider can bite the bullet and implement OpenStack compatibility, hope that a third-party library like libcloud or fog is updated to support its API, or go it alone and develop an ecosystem of products around its own API.

Introducing Jumpgate

When we were faced with this situation at SoftLayer, we chose a fourth option. We wanted to make the process of creating an OpenStack-compatible API simpler and more modular. That's where Jumpgate was born. Jumpgate is a middleware that acts as a compatibility layer between the OpenStack API and a provider's proprietary API. Externally, it exposes endpoints that adhere to OpenStack's published and accepted API specification, which it then translates into the provider's API using a series of drivers. Think of it as a mechanism to enable passing from one realm/space into another — like the jumpgates featured in science fiction works.


How Jumpgate Works
Let's take a look at a high-level example: When you want to create a new virtual instance on OpenStack, you might use the Horizon dashboard or the Nova command line client. When you issue the request, the tool first makes a REST call to a Keystone endpoint for authentication, which returns an authorization token. The client then makes another REST call to a Nova endpoint, which manages the computing instances, to create the actual virtual instance. Nova may then make calls to other tools within the cluster for networking (Quantum), image information (Glance), block storage (Cinder), or more. In addition, your client may also send requests directly to some of these endpoints to query for status updates, information about available resources, and so on.

With Jumpgate, your tool first hits the Jumpgate middleware, which exposes a Keystone endpoint. Jumpgate takes the request, breaks it apart into its relevant pieces, then loads up your provider's appropriate API driver. Next, Jumpgate reformats your request into a form that the driver supports and sends it to the provider's API endpoint. Once the response comes back, Jumpgate again uses the driver to break apart the proprietary API response, reformats it into an OpenStack compatible JSON payload, and sends it back to your client. The result is that you interact with an OpenStack-compatible API, and your cloud provider processes those interactions on their own backend infrastructure.
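A toy version of that translation step might look like the following. The proprietary field names (guestId, powerState) are invented for illustration, and Jumpgate's real driver interface is richer than a single function; the point is only the reshaping of one API's response into another's:

```python
# Reshape a hypothetical proprietary API response into the JSON
# structure an OpenStack Nova "list servers" call returns.

def to_openstack_server(proprietary):
    return {
        'id': str(proprietary['guestId']),
        'name': proprietary['hostname'],
        'status': 'ACTIVE' if proprietary['powerState'] == 'on' else 'SHUTOFF',
    }

def list_servers(driver_response):
    # Nova wraps results in a top-level "servers" key.
    return {'servers': [to_openstack_server(s) for s in driver_response]}

backend = [{'guestId': 42, 'hostname': 'web01', 'powerState': 'on'}]
print(list_servers(backend))
# {'servers': [{'id': '42', 'name': 'web01', 'status': 'ACTIVE'}]}
```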

Internally, Jumpgate is a lightweight middleware built in Python using the Falcon Framework. It provides endpoints for nearly every documented OpenStack API call and allows drivers to attach handlers to these endpoints. This modular approach allows providers to implement only the endpoints that are of the highest importance, rolling out OpenStack API compatibility in stages rather than in one monumental effort. Since it sits alongside the provider's existing API, Jumpgate provides a new API interface without risking the stability already provided by the existing API. It's a value-add service that increases customer satisfaction without a huge increase in cost. Once full implementation is finished, a provider with a proprietary cloud platform can benefit from and offer all the tools that are developed to work with the OpenStack API.

Jumpgate allows providers to test the proper OpenStack compatibility of their drivers by leveraging the OpenStack Tempest test suite. With these tests, developers run the full suite of calls used by OpenStack itself, highlighting edge cases or gaps in functionality. We've even included a helper script that allows Tempest to only run a subset of tests rather than the entire suite to assist with a staged rollout.

Current Development
Jumpgate is currently in an early alpha stage. We've built the compatibility framework itself and started on the SoftLayer drivers as a reference. So far, we've implemented key endpoints within Nova (computing instances), Keystone (identification and authorization), and Glance (image management) to get most of the basic functionality within Horizon (the web dashboard) working. We've heard that several groups outside SoftLayer are successfully using Jumpgate to drive products like Trove and Heat directly on SoftLayer, which is exciting and shows that we're well beyond the "proof of concept" stage. That being said, there's still a lot of work to be done.

We chose to develop Jumpgate in the open with a tool set that would be familiar to developers working with OpenStack. We're excited to debut this project for the broader OpenStack community, and we're accepting pull requests if you're interested in contributing. Making more clouds compatible with the OpenStack API is important and shouldn’t be an individual undertaking. If you're interested in learning more or contributing, head over to our in-flight project page on GitHub: SoftLayer Jumpgate. There, you'll find everything you need to get started along with the updates to our repository. We encourage everyone to contribute code or drivers ... or even just open issues with feature requests. The more community involvement we get, the better.


April 10, 2013

Plivo: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Mike Lauricella from Plivo. Plivo is an open communications and messaging platform with advanced features, simple APIs, easy management and volume pricing.

Company Website:
Tech Partners Marketplace:

Bridging the Gap Between the Web and Telephony

Businesses face a fundamental challenge in the worlds of telephony and messaging: Those worlds move too slowly, require too much telecom knowledge and take too long to adopt. As a result, developers often forgo phone and SMS functionality in their applications because the learning curves are so steep, and the dated architecture seems like a foreign language. Over the last twenty years, the web has evolved a lot faster than telephony, and that momentum only widens the gap between the "old" telecom world and the "new" Internet world. Plivo was created to bridge that gap and make telephony easy for developers to understand and incorporate into their applications with simple tools and APIs.

I could bore you to tears by describing the ins and outs of what we've learned about telephony and telecom since Plivo was founded, but I'd rather show off some of the noteworthy ways our customers have incorporated Plivo in their own businesses. After all, seeing those real-world applications is much more revealing about what Plivo does than any description of the nuts and bolts of our platform, right?

Conferencing Solution
The purest use-cases for Plivo are when our customers can simply leverage powerful telephony functionality. A perfect example is a conferencing solution one of our customers created to host large-scale conferences with up to 200 participants. The company integrated the solution into their product and CRM so that sales reps and customers could jump on conference calls quickly. With that integration, the executive management team can keep track of all kinds of information about the calls ... whether they're looking to find which calls resulted in closed sales or they just want to see the average duration of a conference call for a given time frame.

Call Tracking
Beyond facilitating conference calls quickly and seamlessly, many businesses have started using Plivo's integration to incorporate call-tracking statistics into their environments. Call tracking is big business because information about who called what number, when they called, how long they talked, and the result of the call (sale, no sale, follow up) can determine whether the appropriate interaction has taken place with prospects or customers.

Two Factor Authentication
With ever-increasing concerns about security online, we've seen a huge uptick in developers who come to Plivo for help with two-factor authentication for web services. To ensure that a new site registrant is a real person who has provided a valid phone number (to help cut down on potential fraud), they use Plivo to send text messages with verification codes to those new registrants.
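A minimal sketch of the server side of that flow looks something like this. The SMS delivery is stubbed out; a real application would call an SMS API (such as Plivo's message endpoint) at that point:

```python
import hmac
import secrets

def send_sms(number, text):
    # Stub: a real application would call the SMS provider's API here.
    print('to %s: %s' % (number, text))

def issue_code(number):
    """Generate a 6-digit one-time code and text it to the user."""
    code = '%06d' % secrets.randbelow(10**6)
    send_sms(number, 'Your verification code is ' + code)
    return code

def verify(expected, submitted):
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(expected, submitted)

code = issue_code('+15555550100')   # placeholder phone number
```

The server stores the issued code (with an expiry) and calls `verify` against whatever the user submits on the site.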

Mass Alert Messaging
Because emergencies can happen at any time, our customers have enlisted Plivo's functionality to send out mass alerts via phone calls and SMS messages when their customers are affected by an issue and need to be contacted. These voice and text messages can be sent quickly and easily with our automated tools, and while no one ever wants to deal with an emergency, having a solid communication lifeline provides some peace of mind.

An emerging new standard for communications is WebRTC, an open project that gives web browsers Real-Time Communications (RTC) capabilities. WebRTC makes communications a feature of the web without plugins or complex SIP clients. Plivo already supports WebRTC, and even though the project is relatively young, it's already being used in some amazing applications.

These use-cases are only the tip of the iceberg when it comes to how our customers are innovating on our platform, but I hope they help paint a picture of the kinds of functionality Plivo enables simply and quickly. If you've been itching to incorporate telephony into your application, before you spend hours of your life poring over complex telecom architecture requirements, head over to to see how easy we can make your life. We offer free developer accounts where you can start making calls to other Plivo users and other SIP endpoints immediately, and we'd love to chat with you about how you can leverage Plivo to make your applications communicate.

If you have any questions, feel free to drop us a note at, and we'll get back to you with answers.

-Mike Lauricella, Plivo

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
March 22, 2013

Social Media for Brands: Monitor Twitter Search via Email

If you're responsible for monitoring Twitter for conversations about your brand, you're faced with a challenge: You need to know what people are saying about your brand at all times AND you don't want to live your entire life in front of Twitter Search.

Over the years, a number of social media applications have been released specifically for brand managers and social media teams, but most of those applications (especially the free/inexpensive ones) differentiate themselves only by the quality of their analytics and how real-time their data is reported. If that's what you need, you have plenty of fantastic options. Those differentiators don't really help you if you want to take a more passive role in monitoring Twitter search ... You still have to log into the application to see your fancy dashboards with all of the information. Why can't the data come to you?

About three weeks ago, Hazzy stopped by my desk and asked if I'd help build a tool that uses the Twitter Search API to collect brand keywords mentions and send an email alert with those mentions in digest form every 30 minutes. The social media team had been using Twilert for these types of alerts since February 2012, but over the last few months, messages have been delayed due to issues connecting to Twitter search ... It seems that the service is so popular that it hits Twitter's limits on API calls. An email digest scheduled to be sent every thirty minutes ends up going out ten hours late, and ten hours is an eternity in social media time. We needed something a little more timely and reliable, so I got to work on a simple "Twitter Monitor" script to find all mentions of our keyword(s) on Twitter, email those results in a simple digest format, and repeat the process every 30 minutes when new mentions are found.

With Bear's Python-Twitter library on GitHub, connecting to the Twitter API is a breeze. Why did we use Bear's library in particular? Just look at his profile picture. Yeah ... 'nuff said. So with that Python wrapper to the Twitter API in place, I just had to figure out how to use the tools Twitter provided to get the job done. For the most part, the process was very clear, and Twitter actually made querying the search service much easier than we expected. The Search API finds all mentions of whatever string of characters you designate, so instead of creating an elaborate Boolean search for "SoftLayer OR #SoftLayer OR @SoftLayer ..." or any number of combinations of arbitrary strings, we could simply search for "SoftLayer" and have all of those results included. If you want to see only @ replies or hashtags, you can limit your search to those alone, but because "SoftLayer" isn't a word that gets thrown around much without referencing us, we wanted to see every instance. This is the code we ended up working with for the search functionality:

def status_by_search(search):
    statuses = api.GetSearch(term=search)
    # Keep only the Tweets we haven't reported yet
    results = filter(lambda x: x.id > get_log_value(), statuses)
    returns = []
    if len(results) > 0:
        for result in results:
            returns.append(result)
        return returns, len(returns)

If you walk through the script, you'll notice that we want to return only unseen Tweets to our email recipients. Shortly after we got the Twitter Monitor up and running, we noticed how easy it would be to get spammed with the same messages every time the script ran, so we had to filter our results accordingly. Twitter's API allows you to request Tweets with a Tweet ID greater than one that you specify; however, when I tried designating that "oldest" Tweet ID, we had mixed results ... Whether due to my ignorance or a fault in the implementation, we were getting fewer results than we should. Tweet IDs are unique and numerically sequential, so they can be relied upon as much as a datetime (and they're far easier to work with), so I decided to use the highest Tweet ID from each batch of processed messages to filter the next set of results. The script stores that Tweet ID and uses a little bit of logic to determine which Tweets are newer than the last Tweet reported.

def new_tweets(results):
    if get_log_value() < max(result.id for result in results):
        set_log_value(max(result.id for result in results))
        return True

def get_log_value():
    # The log file name is illustrative; it stores the highest
    # Tweet ID that has already been reported.
    with open('tweet_id.log', 'r') as f:
        return int(f.read())

def set_log_value(messageId):
    with open('tweet_id.log', 'w+') as f:
        f.write(str(messageId))
Once we culled out our new Tweets, we needed our script to email those results to our social media team. Luckily, we didn't have to reinvent the wheel here, and we added a few lines that enabled us to send an HTML-formatted email over any SMTP server. One of the downsides of the script is that login credentials for your SMTP server are stored in plaintext, so if you can come up with another alternative that adds a layer of security to those credentials (or lets you send with different kinds of credentials) we'd love for you to share it.
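For reference, here's a self-contained sketch of that digest-emailing step using only the standard library. It's an illustration rather than the script's actual code; the addresses are placeholders, and the message is built but not sent:

```python
from email.mime.text import MIMEText

def build_digest(tweets, sender, recipient):
    """Render a list of new Tweets as a simple HTML digest email.

    Actually sending it is one more step with the standard library:
    smtplib.SMTP(host, port).sendmail(sender, [recipient], msg.as_string())
    preceded by login() if your SMTP server requires credentials.
    """
    rows = ''.join('<li>%s</li>' % t for t in tweets)
    msg = MIMEText('<ul>%s</ul>' % rows, 'html')
    msg['Subject'] = 'Twitter Monitor: %d new mention(s)' % len(tweets)
    msg['From'] = sender
    msg['To'] = recipient
    return msg

msg = build_digest(['@SoftLayer is great!'],
                   'monitor@example.com', 'team@example.com')
```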

From that point, we could run the script manually from the server (or a laptop for that matter), and an email digest would be sent with new Tweets. Because we wanted to automate that process, I added a cron job that would run the script at the desired interval. As a bonus, if the script doesn't find any new Tweets since the last time it was run, it doesn't send an email, so you won't get spammed by "0 Results" messages overnight.

The script has been in action for a couple of weeks now, and it has gotten our social media team's seal of approval. We've added a few features here and there (like adding the number of Tweets in an email to the email's subject line), and I've enlisted the help of Kevin Landreth to clean up the code a little. Now, we're ready to share the SoftLayer Twitter Monitor script with the world via GitHub!

SoftLayer Twitter Monitor on GitHub

The script should work well right out of the box in any Python environment with the required libraries after a few simple configuration changes:

  • Get your Twitter Customer Secret, Access Token, and Access Secret from your application's page on the Twitter developer site.
  • Copy/paste that information where noted in the script.
  • Update your search term(s).
  • Enter your mailserver address and port.
  • Enter your email account credentials if you aren't working with an open relay.
  • Set the sender (self.from_) and recipient values to your preference.
  • Ensure all of the Python requirements are met.
  • Configure a cron job to run the script at your desired interval. For example, if you want to send emails every 10 minutes: */10 * * * * <path to python> <path to script> > /dev/null 2>&1

As soon as you add your information, you should be in business. You'll have an in-house Twitter Monitor that delivers a simple email digest of your new Twitter mentions at whatever interval you specify!

Like any good open source project, we want the community's feedback on how it can be improved or other features we could incorporate. This script uses the Search API, but we're also starting to play around with the Stream API and SoftLayer Message Queue to make some even cooler tools to automate brand monitoring on Twitter.

If you end up using the script and liking it, send SoftLayer a shout-out via Twitter and share it with your friends!


February 15, 2013

Cedexis: SoftLayer "Master Model Builder"

Think of the many components of our cloud infrastructure as analogous to LEGO bricks. If our overarching vision is to help customers "Build the Future," then our products are "building blocks" that can be purposed and repurposed to create scalable, high-performance architecture. Like LEGO bricks, each of our components is compatible with every other component in our catalog, so our customers are essentially showing off their Master Model Builder skills as they incorporate unique combinations of infrastructure and API functionality into their own product offerings. Cedexis has proven to be one of those SoftLayer "Master Model Builders."

As you might remember from their Technology Partner Marketplace feature, Cedexis offers a content and application delivery system that helps users balance traffic based on availability, performance and cost. They've recently posted a blog about how they integrated the SoftLayer API into their system to detect an unresponsive server (disabled network interface), divert traffic at the DNS routing level and return it as soon as the server became available again (re-enabled the network interface) ... all through the automation of their Openmix service:
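To illustrate the concept (this is our own simplified sketch, not Cedexis's Openmix code), availability-based routing boils down to health-checking the candidate servers and steering DNS answers to the first healthy one:

```python
def pick_server(servers, is_healthy):
    """Return the first healthy server, or None if every candidate is down.

    A real platform like Openmix would also weigh performance and cost;
    this sketch only captures the availability check described above.
    """
    for server in servers:
        if is_healthy(server):
            return server
    return None
```

When the failed server's network interface comes back, the health check passes again and traffic returns to it automatically, which mirrors the behavior Cedexis describes.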

They've taken the building blocks of SoftLayer infrastructure and API connectivity to create a feature-rich platform that improves the uptime and performance for sites and applications using Openmix. Beyond the traffic shaping around unreachable servers, Cedexis also incorporated the ability to move traffic between servers based on the amount of bandwidth you have remaining in a given month or based on the response times it sees between servers in different data centers. You can even make load balancing decisions based on SoftLayer's server management data with Fusion — one of their newest products.

The tools and access Cedexis uses to power these Openmix features are available to all of our customers via the SoftLayer API, and if you've ever wondered how to combine our blocks into your environment in unique, dynamic and useful ways, Cedexis gives a perfect example. In the Product Development group, we love to see these kinds of implementations, so if you're using SoftLayer in an innovative way, don't keep it a secret!


December 19, 2012

SoftLayer API: Streamline. Simplify.

Building an API is a bit of a balancing act. You want your API to be simple and easy to use, and you want it to be feature-rich and completely customizable. Because those two desires happen to live on opposite ends of the spectrum, every API strikes a different balance between complexity and customizability. The SoftLayer API was designed to provide customers with granular control of every action associated with any product or service on our platform; anything you can do in our customer portal can be done via our API. That depth of functionality might be intimidating to developers looking to dive in quickly and incorporate the SoftLayer platform into their applications, so our development team has been working to streamline and simplify some of the most common API services to make them even more accessible.

SoftLayer API

To get an idea of what their efforts look like in practice, Phil posted an SLDN blog with a perfect example of how they simplified cloud computing instance (CCI) creation via the API. The traditional CCI ordering process required developers to define nineteen data points, including:

  • Domain name
  • Package Id
  • Location Id
  • Quantity to order
  • Number of cores
  • Amount of RAM
  • Remote management options
  • Port speeds
  • Public bandwidth allotment
  • Primary subnet size
  • Disk size
  • Operating system
  • VPN Management - Private Network
  • Vulnerability Assessments & Management

While each of those data points is straightforward, you still have to define nineteen of them. You have all of those options when you check out through our shopping cart, so it makes sense that you'd have them in the API, but when it comes to ordering through the API, you don't necessarily need all of those options. Our development team observed our customers' API usage patterns, and they created the slimmed-down and efficient SoftLayer_Virtual_Guest::createObject — a method that only requires seven data points:

  • Hostname
  • Domain name
  • Number of cores
  • Amount of RAM
  • Hourly/monthly billing
  • Local vs SAN disk
  • Operating system

Without showing you a single line of code, you can see the improvement. Default values were established for options like Port speeds and Monitoring based on customer usage patterns, and as a result, developers have to provide less than half the data to place a new CCI order. Because each data point might require multiple lines of code, the volume of API code required to place an order is slimmed down even more. The best part is that if you find yourself needing to modify one of the now-default options like Port speeds or Monitoring, you still can!
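For the curious, a call to the streamlined method might look something like this with the SoftLayer Python bindings (the property names come from the SLDN documentation for SoftLayer_Virtual_Guest; the specific values, including the OS reference code, are just illustrative examples):

```python
def cci_template(hostname, domain, cpus, memory_mb,
                 hourly=True, local_disk=True, os_code='UBUNTU_12_64'):
    # Build the template object for SoftLayer_Virtual_Guest::createObject.
    # Everything beyond these seven data points falls back to the defaults
    # the post describes.
    return {
        'hostname': hostname,
        'domain': domain,
        'startCpus': cpus,
        'maxMemory': memory_mb,
        'hourlyBillingFlag': hourly,
        'localDiskFlag': local_disk,
        'operatingSystemReferenceCode': os_code,
    }

# With a configured API client, the order itself is a single call:
# client['Virtual_Guest'].createObject(cci_template('web1', 'example.com', 2, 2048))
```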

As the development team finds other API services and methods that can be streamlined and simplified like this one, they'll ninja new solutions to make the API even more accessible. Have you tried coding to the SoftLayer API yet? If not, what's the biggest roadblock for you? If you're already a SLAPI coder, what other methods do you use often that could be streamlined?

