Cloud Posts

May 27, 2016

Data Security and Encryption in the Cloud

In Wikipedia’s words, encryption is the process of encoding messages or information in such a way that only authorized parties can read it. On a daily basis, I meet customers from various verticals. Whether it is health care, finance, government, technology, or any other public or privately held entity, they all have specific data security requirements. More importantly, the thought of moving to a public cloud brings its own set of challenges around data security. In fact, data security is the biggest hurdle when making the move from a traditional on-premises data center to a public cloud.

One of the ways to protect your data is by encryption. There are a few ways to encrypt data, and they all have their pros and cons. By the end of this post, you will hopefully have a better understanding of the options available to you and how to choose one that meets your data security requirements.

Data “At Rest” Encryption

At rest encryption refers to the encryption of data that is not moving. This data is usually stored on hardware such as local disk, SAN, NAS, or other portable storage devices. Regardless of how the data gets there, as long as it remains on that device and is not transferred or transmitted over a network, it is considered at rest data.

There are different methodologies for encrypting at rest data. Let’s look at a few of the most common ones:

Disk Encryption: This is a method where all data on a particular physical disk is encrypted. It can be done using self-encrypting drives (SEDs) or third-party solutions from vendors like Vormetric, SafeNet, PrimeFactors, and more. In a public cloud environment, your data will most likely be hosted on a multitenant SAN infrastructure, so key management and the public cloud vendor’s ability to offer dedicated, local, or SAN spindles become critical. Keep in mind that this encryption methodology does not protect data once it leaves the disk. It may also be more expensive and add management overhead. On the other hand, disk encryption solutions are mostly operating system agnostic, allowing for more flexibility.

File Level Encryption: File level encryption is usually implemented by running a third-party application within the operating system to encrypt files and folders. In many cases, these solutions create a virtual or a logical disk where all files and folders residing in it are encrypted. Tools like VeraCrypt (TrueCrypt’s successor), BitLocker, and 7-Zip are a few examples of file encryption software. These are very easy to implement and support all major operating systems.  
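To make the at-rest idea concrete, here is a minimal, illustrative Python sketch of encrypting a blob of data with a secret key before it is written to disk. The function names are mine, and the cipher is a toy CTR-style keystream built from HMAC-SHA256 purely for illustration; a real deployment should use a vetted tool like those named above (or AES from an established crypto library), never a homegrown construction:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream block by block (CTR-style) from HMAC-SHA256.
    blocks = []
    counter = 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hmac.new(key, nonce + counter.to_bytes(8, "big"),
                               hashlib.sha256).digest())
        counter += 1
    return b"".join(blocks)[:length]

def encrypt_bytes(key: bytes, plaintext: bytes) -> bytes:
    # Use a fresh random nonce per message; prepend it so decryption can recover it.
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt_bytes(key: bytes, blob: bytes) -> bytes:
    # Split off the nonce, regenerate the same keystream, and XOR it back out.
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))
```

The point of the sketch is the workflow, not the cipher: data is unreadable on the device without the key, which is exactly why key management becomes the critical operational question in a multitenant cloud.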

Data “In Flight” Encryption

Encrypting data in flight involves encrypting the data stream at one point and decrypting it at another. For example, if you replicate data across two data centers and want to ensure the confidentiality of this exchange, you would use in flight encryption to encrypt the data stream as it leaves the primary data center, then decrypt it at the other end of the cable at the secondary data center. Because the exchange is brief, the keys used to encrypt the frames or packets are no longer needed once the data is decrypted at the other end, so they are simply discarded; there are no keys to manage. The most common protocols used for in flight data encryption are IPsec VPN and TLS/SSL.
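As a small illustration of the TLS case, here is a sketch using Python's standard ssl module (function names are mine). The handshake negotiates ephemeral session keys that protect the stream and are thrown away when the connection closes, which is the "no keys to manage" property described above:

```python
import socket
import ssl

def make_client_context() -> ssl.SSLContext:
    # The default context verifies the server's certificate and checks its
    # hostname, so the stream is both encrypted and authenticated.
    return ssl.create_default_context()

def open_tls_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    # Wrap a plain TCP socket in TLS; the handshake negotiates session keys
    # that live only as long as the connection itself.
    raw = socket.create_connection((host, port))
    return make_client_context().wrap_socket(raw, server_hostname=host)
```

For example, `open_tls_connection("example.com")` returns a socket whose reads and writes are encrypted on the wire, while both endpoints see plaintext.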

And there you have it. Hopefully by now you have a good understanding of the most common encryption options available to you. Keep in mind that more often than not, at rest and in flight encryption are implemented together and complement each other. When choosing the right methodology, it is critical to understand the use case, application, and compliance requirements. You will also want to make sure that the software or technology you choose adheres to strong, widely vetted encryption standards, such as AES, RSA, 3DES, or Blowfish.

-Zeb Ahmed

May 5, 2016

Everything you need to know about IBM POWER8 on SoftLayer

SoftLayer provides industry-leading cloud Infrastructure as a Service from a growing number of data centers around the world. To enable clients to draw critical insights and make better decisions faster, now there’s even more good news—customers and partners can use and rely on the secure, flexible, and open platform of IBM POWER Systems, which have just become available in SoftLayer’s DAL09 data center.

POWER8 servers are built with a processor designed and optimized specifically for big data workloads, combining compute power, cutting-edge memory bandwidth, and I/O in ways that result in increased levels of performance, resiliency, availability, and security.

IBM POWER systems were designed to run many of the most demanding enterprise applications, industry-specific solutions, relational database management systems, and high performance computing environments. POWER8 servers are an ideal system for Linux and support a vast ecosystem of open source, ISV, and IBM software products, giving clients a single, industry-leading open architecture (IBM POWER) in which to store, retrieve, and derive value from the “gold mine” of next generation applications.

The new POWER8 servers available from SoftLayer offer an optimal hybrid cloud infrastructure to test new Linux workloads in a secure and isolated cloud environment with reduced risk. As clients explore newer use cases like advanced analytics, machine learning, and cognitive computing against the combination of vast amounts of both structured and unstructured data, POWER8 and SoftLayer are in a unique position to accelerate client value. This new offering will also continue to leverage the rapidly expanding community of developers contributing to the OpenPOWER ecosystem as well as thousands of independent software vendors that support Linux on Power applications.

The explosive growth of both structured and unstructured data requires businesses to derive insights and change faster than ever to keep pace. The cloud enables you to do just that. Our new and unique solution pairs SoftLayer’s Network-Within-a-Network topology for true out-of-band access, an easy-to-use customer portal, and robust APIs for full remote access to all product and service management options with the unique high performance technology of IBM POWER8, helping accelerate the creation and delivery of the next generation of IT solutions.

For more details, visit our POWER8 servers page.

 

-Chuck Calio,  IBM Power Systems Growth Solution Specialist

May 3, 2016

Make the most of Watson Language Translation on Bluemix

How many languages can you speak (sorry, fellow geeks; I mean human languages, not programming)?

Every day, people across the globe depend more and more on the Internet for their day-to-day activities, increasing the need for software to support multiple languages to accommodate the growing diversity of its users. If you develop software, it is only a matter of time before you are tasked with translating your applications.

Wouldn't it be great if you could learn something with just a few key strokes? Just like Neo in The Matrix when he learns kung fu. Well, wish no more! I'll show you how to teach your applications to speak in multiple languages with just a few key strokes using Watson’s Language Translation service, available through Bluemix. It provides on-the-fly translation between many languages. You pay only for what you use and it’s consumable through web services, which means pretty much any application can connect to it—and it's platform and technology agnostic!

I'll show you how easy it is to create a PHP program with language translation capabilities using Watson's service.

Step 1: The client.

You can write your own code to interact with Watson’s Translation API, but why should you? The work is already done for you. You can pull in the client via Composer, the de facto dependency manager for PHP. Make sure you have Composer installed, then create a composer.json file with the following contents:

composer.json file



We will now ask Composer to install our dependency. Execute one of the following commands from your CLI:



Installing the dependency



After the command finishes, you should have a 'vendor' directory created.

 

Step 2: The credentials.

From Bluemix, add the Language Translation service to your application and retrieve its credentials from the application's dashboard (shown below).






 

Step 3: Put everything together.

At the same level where the composer.json file was created in Step 1, create a PHP file named test.php with the following contents:

test.php file





Save the file, buckle up, and execute it from the command line:

Execute test.php

 

Voilà! Your application now speaks French!

Explore other languages Watson knows and other cool features available through Watson's Language Translation service.
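Because the service is exposed as a plain REST API, you are not limited to PHP. As a rough sketch in Python (standard library only; the endpoint URL, field names, and function name here are my assumptions based on the v2 translate API and the basic-auth credentials from Step 2, so confirm them against the current service documentation), building the request looks like this:

```python
import base64
import json
from urllib import request

# Assumed v2 Language Translation endpoint; verify against the service docs.
SERVICE_URL = "https://gateway.watsonplatform.net/language-translation/api/v2/translate"

def build_translate_request(username: str, password: str, text: str,
                            source: str = "en", target: str = "fr") -> request.Request:
    # JSON body with the text and the language pair to translate between.
    payload = json.dumps({"text": text, "source": source, "target": target}).encode()
    # Basic auth built from the service credentials on the Bluemix dashboard.
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return request.Request(SERVICE_URL, data=payload, headers={
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": "Basic " + creds,
    })

# With real Bluemix credentials, sending the request returns the translation:
# with request.urlopen(build_translate_request(user, pw, "Hello, world")) as resp:
#     print(json.load(resp)["translations"][0]["translation"])
```

The PHP client used in this post wraps exactly this kind of HTTP exchange, which is what makes the service platform and technology agnostic.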

 

-Sergio







 

April 26, 2016

Cloud. Ready-to-Wear.

It’s been five years since I started my journey with SoftLayer. And what a journey it has been—from being one of the first few folks in our Amsterdam office, to becoming part of the mega-family of IBMers; from one data center in Europe to six on this side of the pond and 40+ around the globe; from “Who is SoftLayer?” (or my favorite, “SoftPlayer”), to becoming a cloud environment fundamental for some of the biggest and boldest organizations worldwide.

But the most thrilling difference between 2016 and 2011 that I’ve been observing lately is a shift in the market’s perception of cloud, in which matters are most important to adopters, and in the technology itself becoming mainstream.

Organizations of all sizes (small, medium, and large), while still raising valid questions around the level of control and security, more often talk about the challenges of managing combined on-prem and shared environments, the readiness of their legacy applications to migrate to cloud, and their staff’s competency to orchestrate the new architecture.

At Cloud Expo 2016 (the fifth one for the SoftLayer EMEA team), next to two tremendous keynotes given by Sebastian Krause, General Manager IBM Cloud Europe, and by Rashik Parmar, Lead IBM Cloud Advisor/Europe IBM Distinguished Engineer, we held a roundtable to discuss the connection between hybrid cloud and agile business. Moderated by Rashik Parmar, the discussion confirmed the market’s evolution: from recognizing cloud as technology still proving its value, to technology critical in gaining a competitive advantage in today’s dynamic economy.

Rashik’s guests had deep technology backgrounds and came from organizations of all sizes and flavors: banking, supply chain management, ISVs, publishing, manufacturing, MSPs, insurance, and digital entertainment, to name a few. Most of them already have live cloud deployments, or have one ready to go into production this year.

When it came to the core factors underlying a move into the cloud, they unanimously listed gaining business agility and faster time-to-market. For a few minutes, there was a lively conversation among the panelists about cost and savings. They cited examples of poorly planned cloud implementations that were 20-30 percent more costly than keeping the legacy IT setup. Drawing on the example of a large Australian bank, Rashik urged companies to begin the move into cloud with a careful mapping of their own application landscape before thinking about remodeling the architecture to accommodate cloud.

The next questions the panelists tackled pertained to the drivers behind building hybrid cloud environments, which included:

  • Starting with some workloads and building a business case based on their success; from there, expanding the solution organization-wide
  • Increasing the speed of market entry for new solutions and products
  • Retiring certain legacy applications on-prem, while deploying new ones on cloud
  • Regulatory requirements that demand that some workloads or data remain on-prem

When asked to define “hybrid cloud,” Rashik addressed the highly ambiguous term by simply stating that it refers to any combination of software-defined environment and automation with traditional IT.

The delegates discussed the types of cloud—local, dedicated, and shared—and found it difficult to define who controls hybrid cloud, and who is accountable for what component when something goes wrong. There was a general agreement that many organizations still put physical security over the digital one, which is not entirely applicable in the world of cloud.

Rashik explored, from his experience, where most cases of migrating into cloud usually originate. He referred to usage patterns and how organizations become agile with hybrid IT. The delegates agreed that gaining an option of immediate burstability and removing the headache of optimal resource management, from hardware to internal talent, are especially important.

Rashik then addressed the inhibitors of moving into cloud—and here’s the part that inspired me to write this post. While mentions of security (data security and job security) and the control over the environment arose, the focus repeatedly shifted toward the challenges of applications being incompatible with cloud architecture, complicated applications landscape, and scarcity of IT professionals skilled in managing complex (hybrid) cloud environments.

This is a visible trend demonstrating that the market has left the cloud department store’s changing room and is ready not only to make the purchase, but to “wear” the new technology, with a clear plan for where and when, and with specific outcomes in mind.

The conversation ended with energizing insights about API-driven innovation that enables developers to assemble a wide spectrum of functions, as opposed to being “just a coder.” Other topics included cognitive computing that bridges digital business with digital intelligence, and platforms such as blockchain that are gaining momentum.

To think that not so long ago, I had to explain to the average Cloud Expo delegate what “IaaS” stands for. We’ve come a long way.

 

-Michalina

April 4, 2016

A deeper dive into using VMware on SoftLayer

 

IBM and VMware recently announced an expanded global strategic partnership that enables customers to operate a seamless and consistent cloud, spanning hybrid environments. VMware customers now have the ability to quickly provision new (or scale existing) VMware workloads to IBM Cloud. This helps companies retain the value of their existing VMware-based solutions while leveraging the growing footprint of IBM Cloud data centers worldwide.

IBM customers are now able to purchase VMware software in a flexible, cost-efficient manner to power their deployments on IBM’s bare metal hardware infrastructure service. They’ll also be able to take advantage of their existing skill sets, tools, and technologies versus having to purchase and learn new ones. New customers will have complete control of their VMware environment, allowing them to expand into new markets and reduce startup cost by leveraging SoftLayer’s worldwide network and data centers.

This new offering also allows customers access to the full stack of VMware products to build an end-to-end VMware solution that matches their current on-premises environment or create a new one. Leveraging NSX lets customers manage their SoftLayer network infrastructure and extend their on-premises environment into SoftLayer as well, letting them expand their current capacity while reducing startup capital.

Customers can currently purchase vSphere Enterprise Plus 6.0 from SoftLayer. The VMware software components in Table 1 will be available, for a la carte purchase, for individual SoftLayer bare metal servers by Q2 2016. All products listed will be billed on a per socket basis.

Table 1: VMware software components

  Product Name                                    Version   Charge per
  VMware vRealize Operations Enterprise Edition   6.0       CPU
  VMware vRealize Operations Advanced Edition     6.0       CPU
  VMware vRealize Operations Standard Edition     6.0       CPU
  VMware vRealize Log Insight                     3.0       CPU
  VMware NSX-V                                    6.2       CPU
  VMware Integrated OpenStack (VIO)               2.0       CPU
  Virtual SAN Standard Tier I (0-20TB)            6.X       CPU
  Virtual SAN Standard Tier II (21-64TB)          6.X       CPU
  Virtual SAN Standard Tier III (65-124TB)        6.X       CPU
  VMware Site Recovery Manager                    6.1       CPU
  VMware vRealize Automation Enterprise           6.X       CPU
  VMware vRealize Automation Advanced             6.X       CPU

 

The following FAQs will help you better understand the IBM and VMware partnership:                                                                                                                                             

Q: What are you offering today? And how much does it cost?

A: Today, IBM offers vSphere Enterprise Plus 6.0, which includes vCenter and vCloud Connector. It’s currently available for $85 per CPU for single CPU, dual CPU, and quad CPU servers. The products listed in Table 1 will be available in Q2 2016.

Q: Is per-CPU pricing a change from how VMware software was offered before?

A: Yes, the CPU-based pricing is new, and is unique to IBM Cloud. IBM is currently the only cloud provider to offer this type of pricing for VMware software. CPU-based pricing allows customers to more accurately budget how much they spend for VMware software in the cloud.

Q: Can customers bring the licenses they already own and have acquired via an existing VMware license agreement (e.g., ELA)?

A: Customers can take advantage of the new pricing when purchasing the VMware software through the SoftLayer portal. Please contact your VMware sales representative to get approval if you plan on bringing the license you already own to IBM Cloud.

Q: Will you offer migration services?

A: Yes, migration services will be among the portfolio of managed services offerings we will make available. Details will be announced closer to the time of availability, which is later in 2016.

Q: What storage options are available for VMware environments on SoftLayer?

A: Customers can select from a diverse range of SoftLayer storage offerings and custom solutions depending on their requirements and preferences. Use the Select a Storage Option to use with VMware guide to determine the best storage option for your environment.

Q: Where can I find technical resources to learn more about VMware on SoftLayer?

A: There is extensive technical documentation available on KnowledgeLayer, including:

 

-Kerry Staples and Andreas Groth

February 2, 2016

The SLayer Standard Vol. 2, No. 4

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

What does Marc Jones have to say about SoftLayer?

Our CTO Marc Jones sat down for an interview with Angel Diaz, IBM VP of Cloud Technology & Architecture and host of the IBM Cloud Dragon Dojo Series. Marc discusses his start at SoftLayer, the benefits of the SoftLayer cloud platform, dark fiber, and the importance of global reach. Instead of us telling you what he said, you can watch it yourself.

Find out a bit more about it here.

IBM Watson business gets a new general manager.

IBM’s acquisition of the Weather Company is now complete, and that means a few changes are afoot. First, all of the Weather Company’s workloads are now running in IBM Cloud data centers. And second, David Kenny, who was the Weather Company CEO, is now in charge of the Watson business.

In his new role, Kenny says his primary objective is to make Watson an even more robust platform and a leader in cognitive computing. In TechCrunch, he noted that the weather platform is not just about weather data. The massive amount of data that The Weather Channel takes in is used across various industries to help both companies and consumers make well-educated choices. All of this data will also be a boon to Watson as IBM continues to grow the AI platform with the Weather Company’s data sets.

“Obviously we ingest more weather data than others and process it in the cloud for pilots, insurers or farmers or ordinary citizens to make better informed decisions. But that platform can be reused for other unstructured data sets… this will be helpful for IBM in other business areas. What we have figured out at the Weather Company, and IBM will continue to explore across more IoT applications, is how to take data from lots of places and turn that into decisions to help make things work,” Kenny said.

Find out more about it here.

-Rachel  

January 29, 2016

Cloud, Interrupted: The Official SoftLayer Podcast, Episode 3

You’re never going to believe this. You already know the second episode of Cloud, Interrupted—the one, the only, the official SoftLayer podcast—hit the streets in December. And now, coming in hot, we’re bringing you the long-awaited third episode of Cloud, Interrupted—only a month after the last one! Contain your excitement. We’re getting good at this.

In the third episode of our authoritative, esteemed podcast, we discuss why our first podcasts were recorded in wind tunnels, we pat ourselves on the back for being doers and not scholars, and we reveal the humble, testosterone-fueled origins of the iconic Server Challenge.

Join Kevin Hazard, director of digital content, Phil Jackson, lead technology evangelist, and Teddy Vandenberg, manager of network provisioning, as they wreak havoc interrupting the world of cloud. Yet again.

You skipped that fluff-filled intro, didn’t you? We’ll reward your impatience with the CliffsNotes:

Cloud, Interrupted, Episode 3: In the end, you’ve gotta start somewhere.

  • [00:00:01] Yo yo yo, it’s the new and improved bleep bloops!
  • [00:00:25] We've finally stopped recording Cloud, Interrupted from our pillow forts. Now we just follow the mountains and valleys.
  • [00:04:23] So you want to host your own podcast? Cool. Take it from us on the ultimate, definitive, pretty-much-only guide to success: gear, software, and magical editing.
  • [00:06:24] Teddy takes us on a boring tangent about startups that’s not really a tangent at all. (You decide if it’s boring.)
  • [00:07:25] Ha ha, Kevin totally used to trick out his MySpace page.
  • [00:09:16] GOOD JOB, PHIL!
  • [00:09:26] Phil was THE most popular kid in school. That's how he started programming.
  • [00:13:40] There are two types of technical people: those that do and those that read the docs. Teddy doesn't read the docs. Ask him about YUM.
  • [00:17:59] C'mon, Kevin. No one wants to build a server at a conference for fun. What a dumb idea!

Oh Phil, Phil, Phil. Little did you know how very wrong you were. (Must’ve been the ponytail.)

- Fayza

December 28, 2015

Semantics: "Public," "Private," and "Hybrid" in Cloud Computing, Part II

Welcome back! In the second post in this two-part series, we’ll look at the third definition of “public” and “private,” and we’ll have that broader discussion about “hybrid”—and we’ll figure out where we go after the dust has cleared on the semantics. If you missed the first part of our series, take a moment to get up to speed here before you dive in.

Definition 3—Control: Bare Metal v. Virtual

A third school of thought in the “public v. private” conversation is actually an extension of Definition 2, but with an important distinction. In order for infrastructure to be “private,” no one else (not even the infrastructure provider) can have access to a given hardware node.

In Definition 2, a hardware node provisioned for single-tenancy would be considered private. That single-tenant environment could provide customers with control of the server at the bare metal level—or it could provide control at the operating system level on top of a provider-managed hypervisor. In Definition 3, the latter example would not be considered “private” because the infrastructure provider has some level of control over the server in the form of the virtualization hypervisor.

Under Definition 3, infrastructure provisioned with full control over bare metal hardware is “private,” while any provider-virtualized or shared environment would be considered “public.” With complete, uninterrupted control down to the bare metal, a user can monitor all access and activity on the infrastructure and secure it from any third-party usage.

Defining “public cloud” and “private cloud” using the bare metal versus virtual delineation is easy. If a user orders infrastructure resources from a provider, and those resources are delivered from a shared, virtualized environment, that infrastructure would be considered public cloud. If the user orders a number of bare metal servers and chooses to install and maintain his or her own virtualization layer across those bare metal servers, that environment would be a private cloud.

“Hybrid”

Mix and Match

Now that we see the different meanings “public” and “private” can have in cloud computing, the idea of a “hybrid” environment is a lot less confusing. In actuality, it really only has one definition: A hybrid environment is a combination of any variation of public and private infrastructure.

Using bare metal servers for your database and virtual servers for your Web tier? That’s a hybrid approach. Using your own data centers for some of your applications and scaling out into another provider’s data centers when needed? That’s hybrid, too. As soon as you start using multiple types of infrastructure, by definition, you’ve created a hybrid environment.

And Throw in the Kitchen Sink

Taking our simple definition of “hybrid” one step further, we find a few other variations of that term’s usage. Because the cloud stack is made up of several levels of services—Infrastructure as a Service, Platform as a Service, Software as a Service, Business Process as a Service—“hybrid” may be defined by incorporating various “aaS” offerings into a single environment.

Perhaps you need bare metal infrastructure to build an off-prem private cloud at the IaaS level—and you also want to incorporate a managed analytics service at the BPaaS level. Or maybe you want to keep all of your production data on-prem and do your sandbox development in a PaaS environment like Bluemix. At the end of the day, what you’re really doing is leveraging a “hybrid” model.

Where do we go from here?

Once we can agree that this underlying semantic problem exists, we should be able to start having better conversations:

  • Them: We’re considering a hybrid approach to hosting our next application.
  • You: Oh yeah? What platforms or tools are we going to use in that approach?
  • Them: We want to try and incorporate public and private cloud infrastructure.
  • You: That’s interesting. I know that there are a few different definitions of public and private when it comes to infrastructure…which do you mean?
  • Them: That’s a profound observation! Since we have our own data centers, we consider the infrastructure there to be our private cloud, and we’re going to use bare metal servers from SoftLayer as our public cloud.
  • You: Brilliant! Especially the fact that we’re using SoftLayer.

Your mileage may vary, but that’s the kind of discussion we can get behind.

And if your conversation partner balks at either of your questions, send them over to this blog post series.

-@khazard

December 18, 2015

Semantics: "Public," "Private," and "Hybrid" in Cloud Computing, Part I

What does the word “gift” mean to you? In English, it most often refers to a present or something given voluntarily. In German, it has a completely different meaning: “poison.” If a box marked “gift” is placed in front of an English-speaker, it’s safe to assume that he or she would interact with it very differently than a German-speaker would.

In the same way, simple words like “public,” “private,” and “hybrid” in cloud computing can mean very different things to different audiences. But unlike our “gift” example above (which would normally have some language or cultural context), it’s much more difficult for cloud computing audiences to decipher meaning when terms like “public cloud,” “private cloud,” and “hybrid cloud” are used.

We, as an industry, need to focus on semantics.

In this two-part series, we’ll look at three different definitions of “public” and “private” to set the stage for a broader discussion about “hybrid.”

“Public” v. “Private”

Definition 1—Location: On-premises v. Off-premises

For some audiences (and the enterprise market), whether an infrastructure is public or private is largely a question of location. Does a business own and maintain the data centers, servers, and networking gear it uses for its IT needs, or does the business use gear that’s owned and maintained by another party?

This definition of “public v. private” makes sense for an audience that happens to own and operate its own data centers. If a business has exclusive physical access to and ownership of its gear, the business considers that gear “private.” If another provider handles the physical access and ownership of the gear, the business considers that gear “public.”

We can extend this definition a step further to understand what this audience would consider to be a “private cloud.” Using this definition of “private,” a private cloud is an environment with an abstracted “cloud” management layer (à la OpenStack, CloudStack, or VMware) that runs in a company’s own data center. In contrast, this audience would consider a “public cloud” to be a similar environment that’s owned and maintained by another provider.

Enterprises are often more likely to use this definition because they’re often the only ones that can afford to build and run their own data centers. They use “public” and “private” to distinguish between their own facilities or outside facilities. This definition does not make sense for businesses that don’t have their own data center facilities.

Definition 2—Population: Single-tenant v. Multi-tenant

Businesses that don’t own their own data center facilities would not use Definition 1 to distinguish “public” and “private” infrastructure. If the infrastructure they use is wholly owned and physically maintained by another provider, these businesses are most interested in whether hardware resources are shared with any other customers: Do any other customers have data on or access to a given server’s hardware? If so, the infrastructure is public. If not, the infrastructure is private.

Using this definition, public and private infrastructure could be served from the same third-party-owned data center, and the infrastructure could even be in the same server rack. “Public” infrastructure just happens to provide multiple users with resources and access to a single hardware node. Note: Even though the hardware node is shared, each user can only access his or her own data and allotted resources.

On the flip side, if a user has exclusive access to a hardware node, a business using Definition 2 would consider the node to be private.

Using this definition of “public” and “private,” multiple users share resources at the server level in a “public cloud” environment—and only one user has access to resources at the server level in a “private cloud” environment. Depending on the environment configuration, a “private cloud” user may or may not have full control over the individual servers he or she is using.

This definition echoes back to Definition 1, but it is more granular. Businesses using Definition 2 believe that infrastructure is public or private based on single-tenancy or multi-tenancy at the hardware level, whereas businesses using Definition 1 consider infrastructure to be public or private based on whether the data center itself is single-tenant or multi-tenant.

Have we blown your minds yet? Stay tuned for Part II, where we’ll tackle bare metal servers, virtual servers, and control. We’ll also show you how clear hybrid environments really are, and we’ll figure out where the heck we go from here now that we’ve figured it all out.

-@khazard

December 2, 2015

Cloud, Interrupted: The Official SoftLayer Podcast, Episode 2

Remember that one time we put three chatty cloud guys in a tiny room without windows (where no one can hear you scream) to talk cloud way back in September? Yeah, we do, too. Those were the days. In the second episode of our official, esteemed podcast—Cloud, Interrupted, "Cloud security and Daylight Saving Time drive us insane." for those of you following along at home—we have reasons! Reasons why this is only our second episode! Reasons that make sense! Because we owe it to you, our most loyal listeners. Join Kevin Hazard, director of digital content, Phil Jackson, lead technology evangelist, and Teddy Vandenberg, manager of network provisioning, as they wreak havoc interrupting the world of cloud. Again.

If you TL;DR-ed that intro, here’s the meat and potatoes of our latest podcast. Dig in:

  • [00:00:01] WE NOW HAVE THE BLEEP BLOOPS.
  • [00:01:21] The real reason our second podcast is fashionably late.
  • [00:03:16] It’s not that we’re insane when it comes to Internet security; it’s that no one understands us.
  • [00:06:14] Stay out of our bowels, Kevin!
  • [00:07:19] When you move to the cloud, you’re making all the same security mistakes you always make—multiplied by 10.
  • [00:10:30] What are cloud providers obligated to do in terms of security for their customers?
  • [00:13:00] Yes, we interrupted our cloud conversation (insert groan here). We now hate ourselves for it.
  • [00:13:23] Phil attended a tech conference on a ranch in Ireland (Web Summit), where he experienced Segway-less Segway envy and encountered zombies with attached earlobes. (Learn more about Artomatix: Artomatix Customer Story)
  • [00:20:08] You’re the bleep bloop master, Phil.
  • [00:20:48] Teddy rants (and rants) about Daylight Saving Time while we cower in the corner.
  • [00:24:07] If we do Daylight Saving Time in Unix, are we not taking Teddy seriously?
  • [00:25:27] Conclusion: Teddy hates time. (Yes, still ranting.)
  • [00:25:59] It’s over for everyone—not just Kevin.
  • [00:27:01] Oh, and one more thing, Teddy…

And that’s all she wrote, folks. -Fayza
