Posts Tagged 'Enterprise'

September 12, 2013

"Cloud First" or "Mobile First" - Which Development Strategy Comes First?

Company XYZ knows that the majority of its revenue will come from recurring subscriptions to its new SaaS service. To generate visibility and awareness of the SaaS offering, XYZ needs to develop a mobile presence to reach the offering's potential audience. Should XYZ focus on building a mobile presence first (since its timing is most urgent), or should it prioritize completion of the cloud service first (since its importance is most critical)? Or do both have to be delivered simultaneously?

It's the theoretical equivalent of the "Which came first: The chicken or the egg?" causality dilemma for many technology companies today.

Several IBM customers have asked me recently about whether the implementation of a "cloud first" strategy or a "mobile first" strategy is most important, and it's a fantastic question. They know that cloud and mobile are not mutually exclusive, but their limited development resources demand that some sort of prioritization be in place. However, should this prioritization be done based on importance or urgency?

IBM MobileFirst

The answer is what you'd expect: It depends! If a company's cloud offering consists solely of back-end services (i.e. no requirement or desire to execute natively on a mobile device), then a cloud-first strategy is clearly needed, right? A mobile presence would only be effective in drawing customers to the back-end services if they are in place and work well. However, what if the cloud offering is targeting only mobile users? Not focusing on the mobile-first user experience could sabotage a great set of back-end services.

As this simple example illustrates, prioritizing one development strategy at the expense of the other can have devastating consequences. In this "Is there an app for that?" generation, a lack of predictable responsiveness for improved quality of service and/or quality of experience can drive your customers to competitors who are only a click away.

Continuous delivery is an essential element of both "cloud first" and "mobile first" development. The ability to get feedback quickly from users of new services (and, more importantly, to incorporate that feedback quickly) allows a company to re-shape a service, turning existing users into advocates for it as well as for adjacent or tiered services.

"Cloud first" developers need a cloud service provider that can deliver predictable, superior compute, storage and network services that can be optimized for the type of workload and can adapt to changes in scale requirements. "Mobile first" developers need a mobile application development platform that can ensure the quality of the application's mobile user experience while allowing the mobile application to also leverage back-end services. To accommodate both types of developers, IBM established two "centers of gravity" to allow our customers to strike the right balance between their "cloud first" and "mobile first" development.

It should come as no surprise that the cornerstone of IBM's cloud first offering is SoftLayer. SoftLayer's APIs to its infrastructure services allow companies to optimize their application services based on the needs of the application, and the SoftLayer network also optimizes delivery of the application services to the consumer of the service regardless of the location or the type of client access.

For developers looking to prioritize the delivery of services on mobile devices, we centered our MobileFirst initiative on Worklight. Worklight balances the native mobile application experience and integration with back-end services to streamline the development process for "mobile first" companies.

We are actively working on the convergence of our IBM Cloud First and Mobile First strategies via optimized integration of SoftLayer and Worklight services. IBM customers from small businesses through large enterprises will then be able to view "cloud first" and "mobile first" as two sides of the same development strategy coin.

-Mac

Mac Devine is an IBM distinguished engineer, director of cloud innovation and CTO, IBM Cloud Services Division. Follow him on Twitter: @mac_devine.

September 27, 2011

The Challenges of Cloud Security Below 10,000 Feet

This guest blog was contributed by Wendy Nather, Research Director, Enterprise Security Practice at The 451 Group. Her post comes on the heels of the highly anticipated launch of StillSecure's Cloud SMS, and it provides some great context for the importance of security in the cloud. For more information about Cloud SMS, visit www.stillsecure.com and follow the latest updates on StillSecure's blog, The Security Samurai.

If you're a large enterprise, you're in pretty good shape for the cloud: you know what kind of security you want and need, you have security staff who can validate what you're getting from the provider, and you can hold up your end of the deal – since it takes both customer and provider working together to build a complete security program. Most of the security providers out there are building for you, because that's where the money is; and they're eager to work on scaling up to meet the requirements for your big business. If you want custom security clauses in a contract, chances are, you'll get them.

But at the other end of the scale there are the cloud customers I refer to as being "below the security poverty line." These are the small shops (like your doctor's medical practice) that may not have an IT staff at all. These small businesses tend to be very dependent on third party providers, and when it comes to security, they have no way to know what they need. Do they really need DLP, a web application firewall, single sign-on, log management, and all the premium security bells and whistles? Even if you gave them a free appliance or a dedicated firewall VM, they wouldn't know what to do with it or have anyone to run it.

And when a small business has only a couple of servers in a decommissioned restroom*, the provider may be able to move them to their cloud, but it may not be able to scale a security solution down far enough to make it simple to run and cost-effective for either side. This is the great challenge today: to make cloud security both effective and affordable, both above and below 10,000 feet, no matter whether you're flying a jumbo airliner or a Cessna.

-Wendy Nather, The 451 Group

*True story. I had to run some there.

December 1, 2010

Every Cloud Has a Silver Lining

Last week, Netflix made headlines when the company announced that it was moving most of its operations to Amazon Web Services' Elastic Compute Cloud. The news was greeted with enthusiasm and was seen as further justification of the public cloud. Rightly so: the fact that Netflix generates up to 20% of US traffic during peak times, and that this traffic is moving to the public cloud, seems justification enough to me. This is a great piece of advertising for the cloud (much better than Microsoft's "to the cloud" campaign), and by proxy a great piece of advertising for SoftLayer.

So why did Netflix make the move? Economics - plain and simple. It is less expensive to move to the cloud than it is to continue supporting everything via internal Netflix DCs. In a cloud model, peak traffic loads dictate Netflix's economics - they pay for peaks, but only when they occur. When traffic drops off, Netflix enjoys the resultant cost savings, plus they relieve themselves of a considerable management burden. The argument is straightforward.
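The peak-versus-average economics can be made concrete with some back-of-the-envelope arithmetic. The figures below are invented for illustration, not Netflix's actual numbers; the point is only that in-house capacity sized for the busiest hour must be paid for around the clock, while usage-based billing tracks the daily load curve.

```python
# Hypothetical, illustrative numbers only -- not Netflix's actual costs.
peak_servers = 1000                 # servers needed at the evening traffic peak
hours_per_month = 730
fixed_cost_per_server_hour = 0.50   # owned DC, amortized, running 24/7

# In-house: pay for peak capacity every hour of the month.
in_house = peak_servers * hours_per_month * fixed_cost_per_server_hour

# Cloud: a higher hourly rate, but billed only for servers actually running.
cloud_rate = 0.68
# Say daily load is 1000 servers for 6 peak hours and 300 for the other 18.
server_hours_per_day = 1000 * 6 + 300 * 18
cloud = server_hours_per_day * 30 * cloud_rate

print(f"in-house: ${in_house:,.0f}/mo, cloud: ${cloud:,.0f}/mo")
```

Even at a substantially higher hourly rate, the metered model wins whenever average utilization sits well below peak, which is exactly the shape of a streaming workload.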

All of this has brought me back to consideration of the "private" cloud (which is arguably not the cloud at all, but I digress) and the value that it offers. The industry definition of a private cloud is a cloud implementation that is internal to (still owned and operated by) a single enterprise. SoftLayer defines a private cloud a little differently: SoftLayer remains the IaaS provider, but we ensure that a customer is on a single node (i.e. server). This conversation will stick with the industry definition.

So, what are the impacts of the private cloud across an enterprise?

In theory, a private cloud would give individual departments or discrete project teams the ability to better manage cost. As with a public cloud, a project team would be able to take advantage of the savings that come with paying only for what it uses. However, this requires change in corporate accounting functions, since systems now need to be managed on a "pay as you go" model rather than a cost center model. It also means a fundamental change in IT philosophy: IT now needs to bill departments on a variable use model, and all of a sudden it has to think more like a business unit with a P&L to manage.
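As a sketch of that accounting shift, with made-up departments and usage figures: a cost-center model splits the IT budget evenly, while a pay-as-you-go model sets a metered rate that recovers the same budget but bills each department in proportion to its consumption.

```python
# Hypothetical departments and usage; the budget figure is arbitrary.
usage_hours = {"marketing": 200, "engineering": 1400, "finance": 400}
it_budget = 600.0  # monthly IT cost to recover

# Cost-center model: every department pays an equal share, regardless of use.
flat_share = it_budget / len(usage_hours)
cost_center = {dept: flat_share for dept in usage_hours}

# Pay-as-you-go model: a per-hour rate chosen to recover the same budget.
rate = it_budget / sum(usage_hours.values())
pay_as_you_go = {dept: hours * rate for dept, hours in usage_hours.items()}

for dept in usage_hours:
    print(f"{dept}: flat ${cost_center[dept]:.0f} vs metered ${pay_as_you_go[dept]:.0f}")
```

Heavy users suddenly see their true cost, and light users stop subsidizing them, which is precisely the P&L-style visibility described above.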

The cloud provides the ability to quickly spin up and scale. In the SoftLayer world, this translates to availability in anywhere from two to four hours. This should mean an increase in operational efficiency across departments using the cloud: projects can start and end quickly without concern for lengthy implementation or teardown windows. That said, I am not sure this increase in efficiency is meaningful when balanced against an IT department that must build and support a cloud infrastructure accounting for operation across the entire enterprise. The impact is potentially great at a micro level, but wanes when you consider the larger picture.

From a planning point of view, IT must now consider what a variable use model means in practical terms. Different departments will experience different peaks and valleys based on different workloads. In all likelihood, these peaks will not align on the calendar, nor will they be consistent month over month. In addition, my assumption is that deployment of a cloud will engender unanticipated usage patterns, given the supposed cost and operational flexibility that the cloud delivers within the enterprise. The challenge will be to balance these needs against the delivery of a service that adheres to QoS promises and associated internal service level agreements. (And I think they will have to exist. If IT moves beyond a cost center, and internal organizations are trying to budget based upon forecasted compute use, it only makes sense that IT will be held up against external providers. Indeed, I would expect some rogue departments to defect to external providers based on cost considerations alone.)

My guess is that the IT response to planning will be predictable: over-engineer the private cloud to make sure that it is bulletproof. This might work, but it will be expensive and will paradoxically lead to underutilization of an over-planned resource, something the cloud is supposed to mitigate. This approach is also likely to lead to IT bloat, as capable internal resources are likely to be thin, driving a round of hiring to ensure expertise is on hand to manage the cloud.
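The over-engineering risk has a simple arithmetic root: if each department's slice is sized for its own peak, total capacity exceeds what the pooled cloud actually needs, because the peaks don't coincide. A toy illustration with invented monthly demand figures:

```python
# Invented demand (capacity units per month) for three departments whose
# peaks fall in different months.
monthly_demand = {
    "retail":   [30, 30, 40, 90, 40, 30],   # holiday spike
    "finance":  [80, 30, 30, 80, 30, 30],   # quarter-close spikes
    "research": [20, 20, 70, 20, 20, 70],   # project-driven bursts
}

# Sizing each department for its own peak vs sizing for the pooled peak.
sum_of_peaks = sum(max(series) for series in monthly_demand.values())
pooled_peak = max(sum(month) for month in zip(*monthly_demand.values()))

print(f"per-department sizing: {sum_of_peaks} units, pooled: {pooled_peak} units")
```

Here per-department sizing demands 240 units of capacity while the pooled peak is only 190; over-engineering each slice "to be safe" widens that gap further.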

In addition, I would assume that some applications will continue to be supported outside the cloud (for a variety of reasons: security, I/O challenges), thus adding more cost to the equation.

The arguments against the private cloud are numerous and ought to give the enterprise pause for thought. Regardless, I am willing to bet that private cloud implementations will accelerate in the enterprise. Many companies are supported by IT organizations that are strong and well entrenched within the corporate culture. Part of the fight will be based around losing budget, headcount and perceived power if the decision is made to go to an external IaaS provider. All of the usual arguments will be raised: security, quality of service, performance, and on the list goes. In essence, these are the same arguments that have been made whenever a decision to outsource anything has come to the fore. Does it make sense to outsource everything? The answer is a resounding no, but the argument for looking to the public cloud, or to SoftLayer's private cloud, is strong.

-@gkdog

November 10, 2010

The Custom-Made Cloud

Not to toot my own horn, but I am an actual Rocket Scientist (well, an Aerospace Engineer, but Rocket Scientist sounds way cooler). When you are a Rocket Scientist, most of your time is spent dealing with facts: universal constants, formulas, and a data set that has been validated countless times over. My role as the CTO at SoftLayer is sometimes a challenge because I have to deal with the terrific hyperbole that the tech world inevitably creates. Consider the Segway, Unified Messaging, etc. I think that cloud computing has also fallen prey.

The cloud promises a lot and it does deliver a lot.

  • Control puts decisions and actions in the hands of the customer. Self-service interfaces enable automated infrastructure provisioning, monitoring, and management. APIs provide even greater automation by supporting integration with other tools and processes, and enabling applications to self-manage.
  • Flexibility provides a broader range of capabilities and choices, enabling the customer to strike the ideal balance of capital and operating expenses. In addition, access to additional infrastructure resources happens in minutes rather than weeks, enabling you to respond "on demand" to changes in demand.
  • Flexibility and control combined give administrators more choice. Who manages infrastructure: internal staff or a service provider? Where are workloads processed: in an internal datacenter or in the public cloud? When are workloads processed: is this resource-driven or priority-driven? How much is consumed: is this policy-driven or demand-driven? How is IT consumed: via central administration or self-service?
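The self-management point above can be sketched as a trivial scaling policy: an application observes its own load and decides, by policy, what instance count to request through the provider's API. The thresholds here are arbitrary, and the actual provisioning call is omitted; this only shows the policy-driven decision.

```python
# Toy autoscaling policy: arbitrary thresholds, no real API call.
def autoscale(current_instances, cpu_load, lo=0.25, hi=0.75):
    """Return the instance count this simple policy would request."""
    if cpu_load > hi:
        return current_instances + 1           # scale out under pressure
    if cpu_load < lo and current_instances > 1:
        return current_instances - 1           # scale in when idle
    return current_instances                   # hold steady in the band

print(autoscale(4, 0.9))  # 5 -- overloaded, request one more
print(autoscale(4, 0.1))  # 3 -- idle, release one
```

In a real deployment the returned count would be fed to the provider's provisioning API; the value of the API is that this decision loop can run without a human in it.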

Despite its numerous benefits, the operational and cost effectiveness of the cloud for many enterprises is challenged by the fact that most cloud services come in limited configurations and only serve as standalone solutions. One cloud does not fit all: fixed specs do not allow administrators to optimize a cloud environment with the ratio of processing power, memory or storage that its intended application needs for best performance. Most cloud service providers offer a relatively small number of preconfigured virtual machine images (VMIs), often starting with small, medium and large, each with preset amounts of CPU, RAM and storage. The challenge is that even a few sizes (versus only one) don't fit everybody's needs. Applications perform best when they run on servers with optimized configurations, and every application has unique resource demands. If the server is "too small," performance issues may arise. If the server is "too large," the customer ends up paying for more resources than necessary.
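A minimal sketch of the rounding-up problem, with hypothetical preset sizes and prices: an application that needs an in-between amount of RAM must buy the next size up and pay for capacity that sits idle.

```python
# Hypothetical preset catalog: (GB of RAM, $/month). Real providers'
# sizes and prices differ; the shape of the problem is the same.
presets = [(2, 20), (4, 35), (8, 60), (16, 100)]

def smallest_fit(ram_needed):
    """Return the cheapest preset offering at least ram_needed GB."""
    for ram, price in presets:
        if ram >= ram_needed:
            return ram, price
    raise ValueError("no preset large enough")

app_needs = 5                      # GB the workload actually requires
ram, price = smallest_fit(app_needs)
wasted = ram - app_needs
print(f"forced to {ram} GB at ${price}/mo; {wasted} GB sit idle")
```

A configurable platform lets the customer buy 5 GB when 5 GB is what the application needs; that gap between the preset and the requirement is what a build-your-own configuration closes.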

To a degree we have already been doing lots of "cloudy" things, given our focus on automation. Combine that with a set of tools that let customers self-provision and I think you see where I am headed. The next step up the value chain is SoftLayer's "Build Your Own Cloud" solution. It delivers all of the benefits I discussed above, but adds the logical step of handing configuration control to the customer: customers determine the key characteristics of the environment their cloud runs on.

Cloud Computing Options
Cloud Computing Options Part Two (Monthly)

The end result is a cloud environment that is fit for customer purpose and customer cost. A classic win-win situation. I wonder what we will think of next.

-@nday91

May 29, 2008

Plot Course to Vulcan, Warp Factor 8. Engage!

Resolutely pointing off into the starry void of space on the bridge of the Enterprise, klieg lights gleaming off his majestic dome, Captain Picard causes the Starship Enterprise to leap off on another mission. Asked once how the "warp drive" on Star Trek worked, Patrick Stewart replied, "I say 'Engage' and we go." Best explanation of warp drive I've ever heard.

I find I miss my Linux install. Due to circumstances beyond my control (i.e., I'm too lazy to stop being lazy), and the fact that few games work well on Linux without lots of under-the-hood tweaking, I broke down and bought a Windows installation for my PC. In between mining asteroids in my Retriever mining ship and solving 3D puzzles with a transdimensional gun, I do normal work with my computer: programming, web design, web browsing, video editing, file management, the whole deal.

Windows Vista, however, has a new feature that makes my work awesome. No, I’m not talking about the 3D accelerated desktop with semitransparent windows (although that IS awesome). I’m talking about the new Start Menu search box.

In Windows XP (I’m doing this right now), hitting the Windows key opens up the start menu. I can either use the mouse to navigate the menu (why use the start key if you’re going to mouse the menu?), or navigate with the keyboard arrows. However, this can be quite tedious and slow. If I remember the program’s “.EXE” name and the program is on the Windows System Path, I can select “Run…” and type in the name, like wmplayer for Windows Media Player. But the names are funky and again, the cool programs aren’t on the path.

In Windows Vista, however, when you bump the start menu, a new device, the SEARCH BOX, is automatically engaged in the start menu! So, when I want to use, say, Notepad, I type 'windows key, notepad, enter'. GoldWave (sound recording) is 'windows key, goldwave, enter'. When I want to use an OpenOffice tool, I bump the Windows key, type "open office" and then select the tool I want with the arrow keys, as the search box narrows down the huge Start Menu to just the entries that make sense. Even cooler: when it's budget time, I hit the Windows key and type "budget". Search brings up "Apartment Budget.ods". Select that with the arrow keys, and it opens in OpenOffice Calc (spreadsheet) for me.

It’s like having a command line in Windows. Any program is just a few keystrokes away, and for a Linux nut and a touch typer like me, means that my computer is that much more efficient. I don’t need muscle memory with the mouse to navigate the start menu, I don’t have to squint at the menu items and find my program. I just have to remember the name!

Try it some time. It’s almost as awesome as saying “Engage” and going to Vulcan.

-Shawn
