Posts Tagged 'SoftLayer'

April 20, 2015

The SLayer Standard Vol. 1, No. 10

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

The Battle for Global Market Share
Warmer weather must be around the corner—or it could just be the cloud industry heating up. How will cloud providers profit as more and more providers push for world domination? The Economist predicts an industry change as prices drop.

IBM Partners with TI on Secure APIs for IoT
Allow me to translate: the International Business Machines Corporation is partnering with Texas Instruments to deliver secure application programming interfaces for the Internet of Things. Through its collaboration with TI, IBM will create a Secure Registry Service that will provide trust and authentication practices and protocols across the value chain, from silicon embedded in devices and products to businesses and homes.

(Join the conversation at #IoTNow or #IoT.)

The U.S. Army Goes Hybrid
The U.S. Army is hoping to see a 50 percent cost savings by utilizing IBM cloud services and products. Like many customers, the Army opted for a hybrid solution for security, flexibility, and ease of scale. Read more about what IBM Cloud and SoftLayer are doing for the U.S. Army and other U.S. government departments.

The Only Constant is Change
Or so said Heraclitus of Ephesus. And to keep up with the changing times, IBM has reinvented itself over and over again to stay relevant and successful. This interesting read discusses why big corporations just aren't what they used to be, what major factors have transformed the IT industry over the last couple of decades, and how IBM has been leading the change, time after time.

-JRL

April 10, 2015

The SLayer Standard Vol. 1, No. 9

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Welcome to the Masters
If you’re not practicing your swing this weekend, you’re watching the Masters. Over the next couple of days, professional golfers will seek their shot at landing the coveted Green Jacket. And while everyone might be watching the leaderboard, IBM will be hard at work in what they are calling the “bunker,” located in a small green building at the Augusta National Golf Club.

What does IBM have to do with the Masters? Everything.

Read how IBM, backed by the power of the SoftLayer cloud, is making the Masters website virtually uncrashable.

And for those who can't line the greens to watch their favorite players, IBM is utilizing the lasers the Golf Club has placed around the course to track the ball as it flies from hole to hole. Learn more about the golf-ball tracking technology here.

Open Happiness
In a move to streamline tech operations and cut costs, Coca-Cola Amatil is partnering with IBM Cloud to move some of its platforms to SoftLayer data centers in Sydney and Melbourne—a deal sure to open happiness.

"The move to SoftLayer will provide us with a game-changing level of flexibility, resiliency and reliability to ramp up and down capacity as needed. It will also remove the need for large expenditure on IT infrastructure." - Barry Simpson, CIO, Coca-Cola Amatil

Read more about the new CCA cloud environment and the five-year, multimillion-dollar deal.

-JRL

April 1, 2015

The SLayer Standard Vol. 1, No. 8

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Sunny Skies for IBM Cloud and The Weather Company
IBM made big headlines on Tuesday when it announced it would team up with The Weather Company, boasting a “100 percent chance of smarter business forecasts.”

Bloomberg sits down with Bob Picciano, IBM Analytics Senior VP, and David Kenny, The Weather Company CEO, to discuss what makes this partnership different from past efforts to analyze the weather. Using Watson Analytics and the Internet of Things, the partnership will transform business decision-making based on weather behavior. Read how IBM’s $3 billion investment in the Internet of Things will collect weather data from 100,000 weather stations around the world and turn it into meaningful insights for business owners.

Indian Startups Choose SoftLayer
According to the National Association of Software and Services Companies (NASSCOM), India has the world’s third-largest and fastest-growing startup ecosystem. Like many SoftLayer startup customers, Goldstar Healthcare, Vtiger, Clematix, and Ecoziee Marketing utilize the SoftLayer cloud infrastructure platform to “begin on a small scale and then expand rapidly to meet workload demands without having to worry about large investments in infrastructure development.”

New SoftLayer Storage Offerings
Last week, SoftLayer announced the launch of block storage and file storage, complete with Endurance- and Performance-class tiers. The media was quick to report on the new offerings, which provide customers more choice, flexibility, and control for their storage needs and workloads.

“ … SoftLayer’s focus on tailored capacity and performance needs coincides with the trend in the cloud market of customizing technology based on different application requirements.”– IBM Splits SoftLayer Cloud Storage Into Endurance, Performance Tiers

“In the age of the cloud, the relationship between cloud storage capacity and I/O performance has officially become divorced.” – IBM Falls Into Cloud Storage Pricing Line

Pick your favorite online tech media and read all about it: SiliconANGLE, Computer Weekly, Data Center Knowledge, CRN, V3, Cloud Computing Intelligence, Storage Networking Solutions UK, and DCS Europe.

#IBMandTwitter
There are more than half a billion tweets posted to Twitter every day. IBM is teaming up with Twitter to turn those “tweets into insights for more than 100 organizations around the world.” Leon Sun of The Motley Fool takes a closer look at what the deal means to IBM and Twitter.

“Twitter provides a powerful new lens through which to look at the world. This partnership, drawing on IBM’s leading cloud-based analytics platform, will help clients enrich business decisions with an entirely new class of data. This is the latest example of how IBM is reimagining work.” – Ginni Rometty, IBM Chairman, President and CEO

-JRL

March 30, 2015

The Importance of Data's Physical Location in the Cloud

If top-tier cloud providers use similar network hardware in their data centers and connect to the same transit and peering bandwidth providers, how can SoftLayer claim to provide the best network performance in the cloud computing industry?

Over the years, I've heard variations of that question asked dozens of times, and it's fairly easy to answer with impressive facts and figures. All SoftLayer data centers and network points of presence (PoPs) are connected to our unique global network backbone, which carries public, private, and management traffic to and from servers. Using our network connectivity table, some back-of-the-envelope calculations reveal that we have more than 2,500Gbps of bandwidth connectivity with some of the largest transit and peering bandwidth providers in the world (and that total doesn't even include the private peering relationships we have with other providers in various regional markets). Additionally, customers may order servers with up to 10Gbps network ports in our data centers.

For the most part, those stats explain our differentiation, but part of the bigger network performance story is still missing, and to a certain extent it has been untold—until today.

The 2,500+Gbps of bandwidth connectivity we break out in the network connectivity table only accounts for the on-ramps and off-ramps of our network. Our global network backbone is actually made up of an additional 2,600+Gbps of bandwidth connectivity ... and all of that backbone connectivity transports SoftLayer-related traffic.

This robust network architecture streamlines the access to and delivery of data on SoftLayer servers. When you access a SoftLayer server, the network is designed to bring you onto our global backbone as quickly as possible at one of our network PoPs, and when you're on our global backbone, you'll experience fewer hops (and a more direct route that we control). When one of your users requests data from your SoftLayer server, that data travels across the global backbone to the nearest network PoP, where it is handed off to another provider to carry the data the "last mile."

With this controlled environment, I decided to undertake an impromptu science experiment to demonstrate how location and physical distance affect network performance in the cloud.

Speed Testing on the SoftLayer Global Network Backbone

I work in the SoftLayer office in downtown Houston, Texas. In network-speak, this location is HOU04. You won't find that location on any data center or network tables because it's just an office, but it's connected to the same global backbone as our data centers and network points of presence. From my office, the "last mile" doesn't exist; when I access a SoftLayer server, my bits and bytes only travel across the SoftLayer network, so we're effectively cutting out a number of uncontrollable variables in the process of running network speed tests.

For better or worse, I didn't tell any network engineers that I planned to run speed tests to every available data center and share the results I found, so you're seeing exactly what I saw with no tomfoolery. I just fired up my browser, headed to our Data Centers page, and made my way down the list using the SpeedTest option for each facility. Customers often go through this process when trying to determine the latency, speeds, and network path that they can expect from servers in each data center, but if we look at the results collectively, we can learn a lot more about network performance in general.

With the results, we'll discuss how network speed tests work, what the results mean, and why some might be surprising. If you're feeling scientific and want to run the tests yourself, you're more than welcome to do so.
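If you'd rather script a rough version of the latency measurement than click through each browser test, a few lines of Python will do. This is a simplified stand-in for the Ookla test, not its methodology, and the hostnames below are illustrative placeholders, not confirmed SoftLayer endpoints; substitute the speed test hosts linked from the Data Centers page.

import socket
import time

# Hypothetical test endpoints -- substitute the real speed test hosts from
# the Data Centers page (these placeholder names are assumptions).
hosts = {
    "DAL05": "speedtest.dal05.example.com",
    "SNG01": "speedtest.sng01.example.com",
}

for name, host in hosts.items():
    start = time.time()
    # The TCP handshake takes roughly one round trip, so connect time
    # approximates the ping latency reported by the browser test.
    sock = socket.create_connection((host, 80), timeout=10)
    rtt_ms = (time.time() - start) * 1000
    sock.close()
    print("%s: ~%.0f ms" % (name, rtt_ms))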

The Ookla SpeedTests we link to from the data centers table measured the latency (ping time), jitter (variation in latency), download speeds, and upload speeds between the user's computer and the data center's test server. To run this experiment, I connected my MacBook Pro via Ethernet to a 100Mbps wired connection. At the end of each speed test, I took a screenshot of the performance stats:

SoftLayer Network Speed Test

To save you the trouble of trying to read all of the stats on each data center as they cycle through that animated GIF, I also put them into a table (click the data center name to see its results screenshot in a new window):

Data Center Latency (ms) Download Speed (Mbps) Upload Speed (Mbps) Jitter (ms)
AMS01 121 77.69 82.18 1
DAL01 9 93.16 87.43 0
DAL05 7 93.16 83.77 0
DAL06 7 93.11 83.50 0
DAL07 8 93.08 83.60 0
DAL09 11 93.05 82.54 0
FRA02 128 78.11 85.08 0
HKG02 184 50.75 78.93 2
HOU02 2 93.12 83.45 1
LON02 114 77.41 83.74 2
MEL01 186 63.40 78.73 1
MEX01 27 92.32 83.29 1
MON01 52 89.65 85.94 3
PAR01 127 82.40 83.38 0
SJC01 44 90.43 83.60 1
SEA01 50 90.33 83.23 2
SNG01 195 40.35 72.35 1
SYD01 196 61.04 75.82 4
TOK02 135 75.63 82.20 2
TOR01 40 90.37 82.90 1
WDC01 43 89.68 84.35 0

By performing these speed tests on the SoftLayer network, we can actually learn a lot about how speed tests work and how physical location affects network performance. But before we get into that, let's take note of a few interesting results from the table above:

  • The lowest latency from my office is to the HOU02 (Houston, Texas) data center. That data center is about 14.2 miles away as the crow flies.
  • The highest latency results from my office are to the SYD01 (Sydney, Australia) and SNG01 (Singapore) data centers. Those data centers are at least 8,600 and 10,000 miles away, respectively.
  • The fastest download speed observed is 93.16Mbps, and that number was seen from two data centers: DAL01 and DAL05.
  • The slowest download speed observed is 40.35Mbps from SNG01.
  • The fastest upload speed observed is 87.43Mbps to DAL01.
  • The slowest upload speed observed is 72.35Mbps to SNG01.
  • The upload speeds observed are faster than the download speeds from every data center outside of North America.

Are you surprised that we didn't see any results closer to 100Mbps? Is our server in Singapore underperforming? Are servers outside of North America more selfish to receive data and stingy to give it back?

Those are great questions, and they actually jumpstart an explanation of how the network tests work and what they're telling us.

Maximum Download Speed on 100Mbps Connection

If my office is 2 milliseconds from the test server in HOU02, why is my download speed only 93.12Mbps? To answer this question, we need to understand that to perform these tests, a connection is made using Transmission Control Protocol (TCP) to move the data, and TCP does a lot of work in the background. The download is broken into a number of tiny chunks called packets and sent from the sender to the receiver. TCP wants to ensure that each packet that is sent is received, so the receiver sends an acknowledgement back to the sender to confirm that the packet arrived. If the sender is unable to verify that a given packet was successfully delivered to the receiver, the sender will resend the packet.

This system is pretty simple, but in actuality, it's very dynamic. TCP wants to be as efficient as possible ... to send the fewest number of packets needed to get the entire message across. To accomplish this, the receiver advertises a receive window that tells the sender how much data it can accept at once, and that window is analyzed and adjusted throughout the connection to keep as much data in flight as possible without becoming unstable. Some operating systems are better than others when it comes to tweaking and optimizing TCP transfer rates, but the work TCP does to ensure that packets are sent and received without error adds overhead, and that overhead limits the maximum speed we can achieve.
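As a back-of-the-envelope illustration of that overhead, consider how much of each frame on the wire is actually payload. This estimate assumes a standard 1500-byte Ethernet MTU and typical header sizes; it's an approximation, not an exact model of the test.

# Rough ceiling on TCP goodput over 100Mbps Ethernet, assuming a standard
# 1500-byte MTU. An approximation, not an exact model of the speed test.
mtu = 1500                 # bytes of IP payload per Ethernet frame
headers = 20 + 20 + 12     # IPv4 header + TCP header + typical TCP options
framing = 38               # Ethernet preamble, header, FCS, interframe gap
efficiency = (mtu - headers) / float(mtu + framing)
print("best case: %.1f Mbps" % (100 * efficiency))   # ~94 Mbps

That puts the 93.1Mbps we saw from HOU02 within a megabit or so of the theoretical best case for this port speed.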

Understanding the SNG01 Results

Why did my SNG01 speed test max out at a meager 40.35Mbps on my 100Mbps connection? Well, now that we understand how TCP works behind the scenes, we can see why our download speeds from Singapore are lower than we'd expect. The latency between sending a packet and receiving its acknowledgement factors into how much data TCP will keep in flight at once. Higher ping times cause TCP to keep a smaller window of unacknowledged data on the wire than it would on a low-latency path, to ensure that no sizable chunk of the transfer is lost (and has to be reproduced and resent).

Even with our global backbone optimizing the network path of the packets between Houston and Singapore, the more than 10,000-mile journey, the nature of TCP, and my computer's TCP receive window adjustments all factor into the download speeds recorded from SNG01. Looking at the results in the context of the distance the data has to travel, our results are actually well within the expected performance.
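To put rough numbers on it: a single TCP connection can move at most one receive window of data per round trip, so the throughput ceiling is approximately the window size divided by the RTT. The window sizes below are illustrative assumptions, not measurements from my laptop.

# Max single-stream TCP throughput ~= receive_window / round_trip_time.
rtt_s = 0.195                        # measured round trip to SNG01, in seconds
for window_kb in (64, 256, 1024):    # illustrative receive window sizes
    mbps = window_kb * 1024 * 8 / rtt_s / 1e6
    print("%4d KB window: ~%.1f Mbps" % (window_kb, mbps))

A window of around a megabyte lands right in the neighborhood of the 40.35Mbps we measured, while over the 2ms path to HOU02, even a 64KB window would allow more than the port speed.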

Because the default behavior of TCP is partially to blame for the results, we could actually tweak the test and tune our configurations to deliver faster speeds. To confirm that improvements can be made relatively easily, we can actually just look at the answer to our third question...

Upload > Download?

Why are the upload speeds faster than the download speeds after latency jumps from 50ms to 114ms? Every location in North America is within 2,000 miles of Houston, while the closest location outside of North America is about 5,000 miles away. With what we've learned about how TCP and physical distance play into download speeds, that jump in distance explains why the download speeds drop from 90.33Mbps to 77.41Mbps as soon as we cross an ocean, but how can the upload speeds to Europe (and even APAC) stay on par with their North American counterparts? The only difference between our download path and upload path is which side is sending and which side is receiving. And if the receiver determines the size of the TCP receive window, the most likely culprit in the discrepancy between download and upload speeds is TCP windowing.

A Linux server is built and optimized to be a server, whereas my Mac OS X laptop has a lot of other responsibilities, so it shouldn't come as a surprise that the default TCP receive window handling is better on the server side. With changes to the way my laptop handles TCP, download speeds would likely improve significantly. Additionally, if we wanted to push the envelope even further, we might consider using a different transfer protocol to take advantage of the consistent, controlled network environment.
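As one hedged example of what such a change could look like, here's how to inspect and request a larger receive buffer on a single socket in Python. The operating system may clamp the requested value, and system-wide defaults are controlled by sysctl settings rather than application code.

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Report the OS default receive buffer for a fresh TCP socket.
print("default receive buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

# Request a 4MB receive buffer; the kernel may cap it at its configured maximum.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
print("granted receive buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))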

The Importance of Physical Location in Cloud Computing

These real-world test results under controlled conditions demonstrate how significantly data's geographic proximity to its users affects perceived network performance. We know that the network latency in a 14-mile trip will be lower than the latency in a 10,000-mile trip, but we often don't think about the ripple effect latency has on other network performance indicators. And this experiment actually controls a lot of other variables that can exacerbate the performance impact of geographic distance. The tests were run on a 100Mbps connection because that's a pretty common maximum port speed, but if we ran the same tests on a GigE line, the difference would be even more dramatic. Proof: HOU02 @ 1Gbps v. SNG01 @ 1Gbps

Let's apply our experiment to a real-world example: Half of our site's user base is in Paris and the other half is in Singapore. If we chose to host our cloud infrastructure exclusively from Paris, our users would see dramatically different results. Users in Paris would have sub-10ms latency while users in Singapore have about 300ms of latency. Obviously, operating cloud servers in both markets would be the best way to ensure peak performance in both locations, but what if you can only afford to provision your cloud infrastructure in one location? Where would you choose to provision that infrastructure to provide a consistent user experience for your audience in both markets?

Given what we've learned, we should probably choose a location with roughly the same latency to both markets. We can use the SoftLayer Looking Glass to see that San Jose, California (SJC01) would be a logical midpoint ... At this second, the latency between SJC and PAR on the SoftLayer backbone is 149ms, and the latency between SJC and SNG is 162ms, so both would experience very similar performance (all else being equal). Our users in the two markets won't experience mind-blowing speeds, but neither will experience mind-numbing speeds either.
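That reasoning is easy to turn into a script: given the latency from each candidate data center to each market, pick the site with the lowest worst-case latency. The SJC01 figures below echo the Looking Glass numbers above; the others are illustrative placeholders.

# Choose the data center that minimizes worst-case latency across our markets.
latency_ms = {
    "PAR01": {"Paris": 8,   "Singapore": 300},   # illustrative figures
    "SNG01": {"Paris": 300, "Singapore": 8},
    "SJC01": {"Paris": 149, "Singapore": 162},   # from the Looking Glass above
}

best = min(latency_ms, key=lambda dc: max(latency_ms[dc].values()))
print("best compromise:", best)   # -> SJC01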

The network performance implications of physical distance apply to all cloud providers, but because of the SoftLayer global network backbone, we're able to control many of the variables that lead to higher (or inconsistent) latency to and from a given data center. The longer traffic stays on a single provider's network, the more efficiently it moves. You might see the same latency to another provider's cloud infrastructure from a given location at a given time across the public Internet, but you certainly won't see the same consistency from all locations at all times. SoftLayer has spent millions of dollars to build, maintain, and grow our global network backbone to transport public and private network traffic, and as a result, we feel pretty good about claiming to provide the best network performance in cloud computing.

-@khazard

March 27, 2015

Building “A Thing” at Hackster.io’s Hardware Weekend

Introduction to Hackster.io

Over the weekend in San Francisco, I attended a very cool hackathon put together by the good folks at Hackster.io. Hackster.io’s Hardware Weekend is a series of hackathons all over the country designed to bring together people with a passion for building things, give them access to industry mentors, and see what fun and exciting things they come up with in two days. The registration desk was filled with all kinds of hardware modules to be used for whatever project you could dream up: Intel Edison boards, the Grove Starter Kit, a few other things I couldn't begin to identify, and of course, plenty of stickers.

After a delicious breakfast, we heard a variety of potential product pitches by the attendees, then everyone split off into groups to support their favorite ideas and turn them into a reality.

When not hard at work coding, soldering, or wiring up devices, the attendees heard talks from a variety of industry leaders, who shared their struggles and what worked for their products. The founder of spark.io gave a great talk on how his company began and where it is today.

Building a thing!
After lunch, Phil Jackson, SoftLayer’s lead technology evangelist, gave an eloquent crash course in SoftLayer and how to get your new thing onto the Internet of Things. Phil and I have a long history in Web development, so we provided answers to many questions on that subject. But when it comes to hardware, we are fairly green. So when we weren't helping teams get into the cloud, we tried our hand at building something ourselves.

We started off with some of the hardware handouts: an Edison board and the Grove Starter Kit. We wanted to complete a project in the same time the rest of the teams had—and show off some of the power of SoftLayer, too. Our idea was to read the Grove Kit’s heat sensor, display its readings on the LCD, and post the results to an IBM Cloudant database, which would then be displayed on a SoftLayer server as a live-updating graph.

The first day consisted mostly of Googling variations on “Edison getting started,” “read Grove heat sensor,” “write to LCD,” etc. We started off simply, by trying to make an LED blink, which was pretty easy. Making the LED STOP blinking, however, was a bit more challenging. But we eventually figured out how to stop a program from running. We had a lot of trouble getting our project to work in Python, so we eventually admitted defeat and switched to writing node.js code, which was significantly easier (mostly because everything we needed was on stackoverflow).
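For flavor, here's roughly what the Cloudant-posting step of a project like ours could look like, sketched in Python with the requests library. Our actual code was node.js, and the account, database, and credentials below are placeholders.

import requests

# Placeholders: substitute your own Cloudant account, database, and credentials.
CLOUDANT_DB_URL = "https://ACCOUNT.cloudant.com/temperatures"
AUTH = ("API_KEY", "API_PASSWORD")

reading = {"sensor": "grove-temp", "celsius": 22.5,
           "timestamp": "2015-03-27T12:00:00Z"}

# Cloudant speaks the CouchDB API: POSTing a JSON document to a database
# creates a new record, which a graphing page can later query.
resp = requests.post(CLOUDANT_DB_URL, json=reading, auth=AUTH)
print(resp.status_code, resp.json())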

After we got the general idea of how these little boards worked, our project came together very quickly at the end of Day 2—and not a moment too soon. The second I shouted, “IT WORKS!” it was time for presentations—and for us to give out the lot of Raspberry Pis we brought to some lucky winners.

And, without further ado, we present to you … the winners!

BiffShocker

This team wanted to mod out Hackster’s DeLorean time machine to prevent Biff (or anyone else) from taking it out for a spin. They used a variety of sensors to monitor the DeLorean for any unusual or unauthorized activity, and if all else failed, were prepared to administer a deadly voltage through the steering wheel (represented by harmless LEDs in the demo) to stop the interloper from stealing the time machine. The team has a wonderful write-up of the sensors they used, along with the products used to bring everything together.

This was a very energetic team who we hope will use their new Raspberry Pis to keep the space-time continuum clear.

KegTime

The KegTime project aimed to make us all more responsible drinkers by using an RFID reader to measure alcohol consumption and call Uber for you when you’ve had enough. They used a SoftLayer server to host all the drinking data and to interact with Uber’s API to call a ride at the appropriate moment. Their demo included a working (and filled) keg with a pretty fancy LED-laden tap, which was very impressive. In recognition of their efforts, we awarded them five Raspberry Pis so they can continue to build cool projects that make the world a better place.

The Future of Hackster.io
Although this is the end of the event in San Francisco, there are many more Hackster.io events coming up in the near future. I will be going to Phoenix next on March 28 and look forward to all the new projects inventors come up with.

Be happy and keep hacking!

-Chris

March 25, 2015

Introducing New Block Storage and File Storage

Everyone knows data growth is exploding. The chart below illustrates data growth—in zettabytes—over the last 11 years.

Storing all that data can get complicated. The rise of cloud computing and virtualization has led to myriad options for data storage. Kevin Trachier did a great job of defining and highlighting the differences in various cloud storage options in his blog post, Which storage solution is best for your project?

Today, I’m excited to announce that we’ve expanded SoftLayer’s cloud storage portfolio to include two new storage products: block storage and file storage, both featuring Performance and Endurance options. These storage offerings allow you to create storage volumes or shares and connect them to your bare metal or virtual servers using either NFS or iSCSI connectivity.

The Endurance and Performance classes of both block storage and file storage feature:

  • Storage sizes to fit any application—from 20GB to 12TB
  • Highly available connectivity—redundant networking connections reduce risk and mitigate against unplanned events to provide business continuity
  • Allocated IOPS—meet any workload requirement through customizable levels of IOPS that are there when you need them
  • Durable and Resilient—infrastructure provides peace of mind against data loss without the need to manage system-level RAID arrays
  • Concurrent Access—multiple hosts can simultaneously access both block and file volumes in support of advanced use cases such as clustered databases

The Endurance class of both block storage and file storage is available in three tiers, allowing you to choose the right balance of performance and cost for your needs:

  • 0.25 IOPS per GB is designed for workloads with low I/O intensity. Example applications include storing mailboxes or departmental-level file shares.
  • 2 IOPS per GB is designed for most general purpose use. Example applications include hosting small databases backing Web applications or virtual machine disk images for a hypervisor.
  • 4 IOPS per GB is designed for higher intensity workloads. Example applications include transactional and other performance-sensitive databases.

All Endurance tiers support snapshots and replication to remote data centers.
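Because an Endurance volume's allocated IOPS scale with its provisioned capacity, sizing is simple arithmetic. A quick sketch, using an example 500GB volume:

# Allocated IOPS for an Endurance volume = tier rate (IOPS per GB) x size in GB.
tiers = {"0.25 IOPS/GB": 0.25, "2 IOPS/GB": 2, "4 IOPS/GB": 4}
size_gb = 500   # example volume size

for name, rate in tiers.items():
    print("%-12s at %dGB: %d IOPS" % (name, size_gb, int(size_gb * rate)))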

We designed the Performance class of both block storage and file storage to support high I/O applications like relational databases that require consistent levels of performance. Block volumes and file shares can be provisioned with up to 6,000 IOPS and 96MB/s of throughput.

Available sizes and IOPS combinations: [table not reproduced here]

Block storage and file storage are available in SoftLayer data centers worldwide. SoftLayer customers can log in to the customer portal and start using them today.

-Michael

March 18, 2015

SoftLayer, Bluemix and OpenStack: A Powerful Combination

Building and deploying applications on SoftLayer with Bluemix, IBM’s Platform as a Service (PaaS), just got a whole lot more powerful. At IBM InterConnect, we announced a beta service for deploying OpenStack-based virtual servers within Bluemix. Obviously, the new service is exciting because it brings together the scalable, secure, high-performance infrastructure from SoftLayer with the open, standards-based cloud management platform of OpenStack. But making the new service available via Bluemix presents a unique set of opportunities.

Now Bluemix developers can deploy OpenStack-based virtual servers on SoftLayer or their own private OpenStack cloud in a consistent, developer-friendly manner. Without changing your code, your configuration, or your deployment method, you can launch your application to a local OpenStack cloud on your premises, a private OpenStack cloud you have deployed on SoftLayer bare metal servers, or to SoftLayer virtual servers within Bluemix. For instance, you could instantly fire up a few OpenStack-based virtual servers on SoftLayer to test out your new application. After you have impressed your clients and fully tested everything, you could deploy that application to a local OpenStack cloud in your own data center—all from within Bluemix. With Bluemix providing the ability to deploy applications across cloud deployment models, developers can create an infrastructure configuration once and deploy consistently, regardless of the stage of their application development life cycle.

OpenStack-based virtual servers on SoftLayer enable you to manage all of your virtual servers through standard OpenStack APIs and user interfaces, and to leverage the tooling, knowledge, and processes you or your organization have already built out. So the choice is yours: you may fully manage your virtual servers directly from within the Bluemix user interface, or choose standard OpenStack interface options such as the Horizon management portal, the OpenStack API, or the OpenStack command line interface. For clients who are looking for enterprise-class infrastructure as a service but wish to avoid getting locked into a vendor’s proprietary interface, our new OpenStack standard access provides a new choice.
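As a sketch of what standard OpenStack API access can look like in practice, here's how booting a virtual server might be done with the python-novaclient library. The credentials, endpoint, and flavor/image names are placeholders, not values from the beta.

from novaclient import client

# Placeholder credentials and endpoint -- substitute the real values
# for your account.
nova = client.Client("2", "USERNAME", "PASSWORD", "PROJECT_ID",
                     "https://KEYSTONE_ENDPOINT:5000/v2.0")

# Boot a virtual server exactly as you would against any OpenStack cloud.
flavor = nova.flavors.find(name="m1.small")       # assumed flavor name
image = nova.images.find(name="ubuntu-14.04")     # assumed image name
server = nova.servers.create("demo-server", image, flavor)
print(server.id)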

Providing OpenStack-based virtual servers is just one more (albeit major) step toward our goal of providing even deeper OpenStack integration with SoftLayer services. For clients who want enterprise-class Infrastructure as a Service (IaaS) that is available globally and accessible via standard OpenStack interfaces, OpenStack-based virtual servers on SoftLayer deliver exactly that.

The beta is open now for you to test deploying and running servers on the new SoftLayer OpenStack public cloud service through Bluemix. You can sign up for a Bluemix 30-day free trial.

- @marcalanjones

March 6, 2015

The SLayer Standard Vol. 1, No. 7: The IBM InterConnect Edition

Last week, an estimated 21,000 IBMers, SLayers, customers, and partners from around the world flooded Las Vegas, Nevada, to attend the first-ever IBM InterConnect. This new conference combined three popular IBM conferences (Impact, Innovate, and Pulse) into a single, premier cloud and mobile techno-topia.

What our engineers and developers did in Las Vegas after conference hours might have stayed in Las Vegas, but IBM’s InterConnect hits and announcements didn’t. Here’s a recap:

Speed to Market Wins the Cloud Computing Race
Everyone likes to go fast, and the new senior vice president for IBM Cloud, Robert LeBlanc, likes to go super-fast. “What I’m focusing on is speed,” LeBlanc says.

In this blink-and-the-market-changes world, time-to-market determines the winners and losers in cloud computing. Part of LeBlanc’s strategy is opening new SoftLayer data centers. If you haven’t heard the news, SoftLayer will be launching Sydney and Montreal data centers in the next 30 days, with more locations coming soon. Stay tuned.

Read more on how LeBlanc plans to win the cloud business race.

Cloudy skies on the horizon—that’s a good thing!
Our CEO, Ginni Rometty, announced a $4 billion investment in cloud services (shared with the data analytics and mobile businesses). She’s hoping that the investment will spur $40 billion a year in revenue come 2018.

Signs of the investment could be seen as execs at InterConnect announced new hybrid services coming in 2015, including enterprise containers. [What’s a container? Read our blog post.]

In fact, hybrid was a big theme at InterConnect. “We are going to make all those clouds act like one,” says Angel Diaz, vice president of IBM cloud technologies. IBM Cloud (powered by SoftLayer) will be a one-stop shop: a cloud superstore with a smorgasbord of aaS offerings.

It looks like it’ll be an exciting ride for IBM over the next couple of years. Make sure to keep up with the headlines for more announcements in the coming months.

-JRL

March 4, 2015

Docker: Containerization for Software

Before modern-day shipping, packing and transporting differently shaped boxes and other oddly shaped items from ships to trucks to warehouses was difficult, inefficient, and cumbersome. That was until the modern-day shipping container was introduced to the industry. These containers could easily be stacked and organized onto a cargo ship, then easily transferred to a truck to be sent on to their final destination. Solomon Hykes, Docker founder and CTO, likens Docker to the shipping industry’s solution for moving goods: Docker utilizes containerization for shipping software.

Docker, an open platform for distributed applications used by developers and system administrators, leverages standard Linux container technologies and some git-inspired image management technology. Users can create containers that have everything they need to run an application, just like a virtual server, but are much lighter to deploy and manage. Each container has all the binaries it needs, including libraries and middleware, configuration, and its activation process. The containers can be moved around [like containers on ships] and executed on any Docker-enabled server.

Container images are built and maintained using deltas, which can be used by several other images. Sharing reduces the overall size and allows for easy image storage in Docker registries [like containers on ships]. Any user with access to the registry can download the image and activate it on any server with a couple of commands. Some organizations have development teams that build the images, which are run by their operations teams.

Docker & SoftLayer

The lightweight containers can be used on both virtual servers and bare metal servers, making Docker a nice fit with a SoftLayer offering. You get all the flexibility of a re-imaged server without the downtime. You can create red-black deployments, and mix hourly and monthly servers, both virtual and bare metal.

While many people share images on the public Docker registry, security-minded organizations will want to create a private registry by leveraging SoftLayer object storage. You can create Docker images for a private registry that will store all its information with object storage. Registries are then easy to create and move to new hosts or between data centers.

Creating a Private Docker Registry on SoftLayer

Use the following information to create a private registry that stores data with SoftLayer object storage. [All the commands below were executed on an Ubuntu 14.04 virtual server on SoftLayer.]

Optional setup step: Change Docker backend storage AuFS

Docker has several options for an image storage backend. The default backend is DeviceMapper, but it was not very stable during our testing, failing to start and to export images; the solution was to move to Another Union File System (AuFS). This step may not be necessary in your specific build, depending on updates to the operating system or Docker itself.
  1. Install the following package to enable AuFS:
    apt-get install linux-image-extra-3.13.0-36-generic
  2. Edit /etc/init/docker.conf, and add the following line or argument:
    DOCKER_OPTS="--storage-driver=aufs"
  3. Restart Docker, and check if the backend was changed:
    service docker restart
    docker info

The command should indicate AuFS is being used. The output should look similar to the following:
Containers: 2
Images: 29
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Dirs: 33
Execution Driver: native-0.2
Kernel Version: 3.13.0-36-generic
WARNING: No swap limit support

Step 1: Create image repo

  1. Create the directory registry-os in a work directory.
  2. Create a file named Dockerfile in the registry-os directory. It should contain the following code:
    # start from a registry release known to work
    FROM registry:0.7.3
    # get the swift driver for the registry
    RUN pip install docker-registry-driver-swift==0.0.1
    # SoftLayer uses v1 auth and the sample config doesn't have an option
    # for it so inject one
    RUN sed -i '91i\ swift_auth_version: _env:OS_AUTH_VERSION' /docker-registry/config/config_sample.yml
  3. Execute the following command from the directory that contains the registry-os directory to build the registry container:
    docker build -t registry-swift:0.7.3 registry-os

Step 2: Start it with your object storage credential

The credentials and container on the object storage must be provided in order to start the registry image. The standard Docker way of doing this is to pass the credentials as environment variables.

docker run -it -d -e SETTINGS_FLAVOR=swift \
  -e OS_AUTH_URL='https://dal05.objectstorage.service.networklayer.com/auth/v1.0' \
  -e OS_AUTH_VERSION=1 \
  -e OS_USERNAME='API_USER' \
  -e OS_PASSWORD='API_KEY' \
  -e OS_CONTAINER='docker' \
  -e GUNICORN_WORKERS=8 \
  -p 127.0.0.1:5000:5000 registry-swift:0.7.3

This example assumes we are storing images in DAL05 on a container called docker. API_USER and API_KEY are the object storage credentials you can obtain from the portal.

Step 3: Push image

An image needs to be pushed to the registry to make sure everything works. The image push involves two steps: tagging an image and pushing it to the registry.
docker tag registry-swift:0.7.3 localhost:5000/registry-swift

docker push localhost:5000/registry-swift


You can ensure that it worked by inspecting the contents of the container in the object storage.
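One way to do that inspection programmatically is with the python-swiftclient library, using the same credentials the registry container was started with. This is a sketch; API_USER and API_KEY are the same placeholders as above.

from swiftclient import client as swift

# Connect with the same object storage credentials used to start the registry.
conn = swift.Connection(
    authurl="https://dal05.objectstorage.service.networklayer.com/auth/v1.0",
    user="API_USER", key="API_KEY", auth_version="1")

# List the objects the registry has written into the 'docker' container.
headers, objects = conn.get_container("docker")
for obj in objects:
    print(obj["name"])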

Step 4: Get image

The image can be downloaded once successfully pushed to object storage via the registry by issuing the following command:
docker pull localhost:5000/registry-swift

Images can be downloaded from other servers by replacing localhost with the IP address of the registry server.

Final Considerations

The Docker container can be pushed throughout your infrastructure once you have created your private registry. Because the registry stores all of its data in object storage, failure of the machine that contains the registry can be quickly mitigated by restarting the registry image on another node; keeping the image available on more than one node lets you leverage the SoftLayer platform and the high durability of object storage.

If you haven’t explored Docker, visit their site, and review the use cases.

-Thomas

February 20, 2015

Create and Deliver Marketing or Transactional Emails

The SoftLayer email delivery service is a highly scalable, cloud-based email relay solution. Through SoftLayer's partnership with SendGrid, an email-as-a-service provider, customers are able to create and deliver marketing or transactional emails via the customer portal or the SendGrid APIs.

The SoftLayer email delivery service isn’t a full corporate email solution. It’s intended as a simplified method for delivering digital marketing (e.g., newsletters and coupons) and transactional content (e.g., order confirmation, shipping notice, and password reset) to customers.

Architecture

Traditionally, email is first sent through an outbound mail server that's configured and maintained in-house, which is often costly and cumbersome.

With the SoftLayer email delivery service, the process is simplified; the only requirement is a connection to the Internet.

Package Comparison

The following table lists the service levels available to SoftLayer customers. The Free and Basic tiers are suitable for smaller applications with lower volume requirements. The Advanced and Enterprise levels are more suitable for larger applications and customers that require enhanced monitoring and other advanced features. Note that marketing emails are only available in the Advanced and Enterprise tiers.

Getting Started

Use the following steps to sign up for the SoftLayer email delivery service.

  1. Log on to the customer portal.
  2. Click Services, Email Delivery.
  3. Click the Order Email Delivery Service link at the top of the page.
  4. Choose your desired package, and fill out the required information. Remember, for marketing emails, you must select either the Advanced or Enterprise package.

Configuring a Marketing Email

Most of your interaction will be through the vendor portal provided by SendGrid. The following steps outline how to compose and deliver a marketing email to a list of subscribers.

  1. From the SoftLayer customer portal, navigate to Services, Email Delivery Service and click Actions, Access Vendor Portal for your desired account.
  2. Once in the SendGrid portal, click the Marketing Email link.
  3. You’ll be taken to the Marketing Email Dashboard. Click the Create a Sender Address button.
  4. Fill in the required information and click Save.
  5. Navigate back to the Marketing Email Dashboard, and click the Create Recipient List button.
  6. Enter a name for the list in the List Name field. Be sure that it’s something meaningful, such as Residential Customers.
  7. You can either Upload a list of contact emails or Add recipients manually. When adding recipients manually, you’ll be asked to verify the addresses that you enter. Click the Save button when done entering addresses.
  8. Navigate back to the Marketing Email Dashboard and click the Create Marketing Email button.
  9. Enter the title of the email in the Marketing Email Title field, and under Pick a Sender Address, select the address the email should come from. Choose your content type and how to send the email. Split Test my Marketing Email, under Choose how to send your Marketing Email, is an advanced feature that lets you send different recipients different versions of the same email—sending different versions helps determine which is most effective.
  10. Select the list of recipients to whom the email is to be sent and click Save.
  11. Next, select the template for the email. Options include Basic, Design, and My Saved Templates.
  12. Enter your email content. Make sure to provide a message subject.
  13. Review your email, and select when you would like it sent—Send Now, based on a Schedule, or Save As Draft. Click Finish when you’re done, or Save & Exit for a draft.
  14. You will then be brought back to the Marketing Email Dashboard, where you can monitor the results of your email campaign.

Setting Up a Transactional Email

The following example shows how to integrate your app with SendGrid to send new users a welcome email. This example makes use of the SendGrid template engine, although it’s not required.

  1. From the SendGrid portal, click the Template Engine button.
  2. Click the Create Template button, enter the Template Name, and click Save.
  3. Design and modify your email and click Save when finished.
  4. Your new template should now be Active and ready to be used by the API.
  5. Click the Apps link in the top navigation bar.
  6. Click the Template Engine link on the right side of the screen.
  7. Take note of the ID of the template you just created.
  8. Use the curl utility to test your email via the SendGrid Web API.
  9. Execute the following to send a test email using your new template.


curl -d 'to=RECIPIENT&subject="Test subject"&text="Test Body"&from=SENDER&api_user=API_USER&api_key=API_KEY&x-smtpapi={"filters":{"templates":{"settings":{"enable":1,"template_id":"6770c11f-97d5-4be9-8811-c86525799ec9"}}}}' https://api.sendgrid.com/api/mail.send.json

[Replace RECIPIENT, SENDER, API_USER, and API_KEY with your recipient address, sender address, and SendGrid credentials.]

For more information on how the SoftLayer email delivery service can help you get back to your core business, check out this blog post.

-Sean

Worldwide Channel Solutions Architect for SoftLayer, an IBM Company
