Posts Tagged 'Services'

September 12, 2013

"Cloud First" or "Mobile First" - Which Development Strategy Comes First?

Company XYZ knows that the majority of its revenue will come from recurring subscriptions to its new SaaS service. To generate visibility and awareness of the SaaS offering, XYZ needs to develop a mobile presence to reach the offering's potential audience. Should XYZ focus on building a mobile presence first (since its timing is most critical), or should it prioritize the completion of the cloud service first (since its importance is most critical)? Do both have to be delivered simultaneously?

It's the theoretical equivalent of the "Which came first: The chicken or the egg?" causality dilemma for many technology companies today.

Several IBM customers have asked me recently about whether the implementation of a "cloud first" strategy or a "mobile first" strategy is most important, and it's a fantastic question. They know that cloud and mobile are not mutually exclusive, but their limited development resources demand that some sort of prioritization be in place. However, should this prioritization be done based on importance or urgency?

IBM MobileFirst

The answer is what you'd expect: It depends! If a company's cloud offering consists solely of back-end services (i.e. no requirement or desire to execute natively on a mobile device), then a cloud-first strategy is clearly needed, right? A mobile presence would only be effective in drawing customers to the back-end services if they are in place and work well. However, what if the cloud offering is targeting only mobile users? Not focusing on the mobile-first user experience could sabotage a great set of back-end services.

As this simple example illustrates, prioritizing one development strategy at the expense of the other can have devastating consequences. In this "Is there an app for that?" generation, a lack of predictable responsiveness for improved quality of service and/or quality of experience can drive your customers to competitors who are only a click away.

Continuous delivery is an essential element of both "cloud first" and "mobile first" development. The ability to get feedback quickly from users of new services (and, more importantly, to incorporate that feedback quickly) allows a company to re-shape a service so that existing users become advocates for the service as well as for adjacent or tiered services. "Cloud first" developers need a cloud service provider that can continuously deliver predictable, superior compute, storage and network services that can be optimized for the type of workload and adapt to changing scale requirements. "Mobile first" developers need a mobile application development platform that can ensure the quality of the application's mobile user experience while allowing the application to leverage back-end services. To accommodate both types of developers, IBM established two "centers of gravity" to help our customers strike the right balance between "cloud first" and "mobile first" development.

It should come as no surprise that the cornerstone of IBM's cloud first offering is SoftLayer. SoftLayer's APIs to its infrastructure services allow companies to optimize their application services based on the needs of the application, and the SoftLayer network also optimizes delivery of the application services to the consumer of the service regardless of the location or the type of client access.

For developers looking to prioritize the delivery of services on mobile devices, we centered our MobileFirst initiative on Worklight. Worklight balances the native mobile application experience and integration with back-end services to streamline the development process for "mobile first" companies.

We are actively working on the convergence of our IBM Cloud First and Mobile First strategies via optimized integration of SoftLayer and Worklight services. IBM customers from small businesses through large enterprises will then be able to view "cloud first" and "mobile first" as two sides of the same development strategy coin.

-Mac

Mac Devine is an IBM distinguished engineer, director of cloud innovation and CTO, IBM Cloud Services Division. Follow him on Twitter: @mac_devine.

June 4, 2013

IBM to Acquire SoftLayer

As most have seen by now, this morning we announced IBM's intent to acquire SoftLayer. It's not just big news, it's great news for SoftLayer and our customers. I'd like to take a moment and share a little background on the deal and pass along a few resources to answer questions you may have.

We founded SoftLayer in 2005 with the vision of becoming the de facto platform for the Internet. We committed ourselves to automation and innovation. We could have taken shortcuts to make a quick buck by creating manual processes or providing one-off services, but we invested in processes that would enable us to build the strongest, most scalable, most controllable foundation on which customers can build whatever they want. We created a network-within-a-network topology of three physical networks to every SoftLayer server, and all of our services live within a unified API. "Can it be automated?" was not the easiest question to ask, but it's the question that enabled us to grow at Internet scale.

As part of the newly created IBM Cloud Services division, customers and clients from both companies will benefit from a higher level of choice and a higher level of service from a single partner. More important, the real significance will come as we merge technology that we developed within the SoftLayer platform with the power and vision that drives SmartCloud and pioneer next-generation cloud services. It might seem like everyone is "in the cloud" now, but the reality is that we're still in the early days in this technology revolution. What the cloud looks like and what businesses are doing with it will change even more in the next two years than it has in the last five.

You might have questions in the midst of the buzz around this acquisition, and I want you to get answers. A great place to learn more about the deal is the SoftLayer page on IBM.com. From there, you can access a FAQ with more information, and you'll also learn more about the IBM SmartCloud portfolio that SoftLayer will complement.

A few questions that may be top of mind for the customers reading this blog:

How does this affect my SoftLayer services?
Between now and when the deal closes (expected in the third quarter of this year), SoftLayer will continue to operate as an independent company with no changes to SoftLayer services or delivery. Nothing will change for you in the foreseeable future.

Your SoftLayer account relationships and support infrastructure will remain unchanged, and your existing sales and technical representatives will continue to provide the support you need. At any time, please don't hesitate to reach out to your SoftLayer team members.

Over time as any changes occur, information will be communicated to customers and partners with ample time to allow for planning and a smooth transition. Our customers will benefit from the combined technologies and skills of both companies, including increased investment, global reach, industry expertise and support available from IBM, along with IBM and SoftLayer's joint commitment to innovation.

Once the acquisition has been completed, we will be able to provide more details.

What does it mean for me?
We entered this agreement because it will enable us to continue doing what we've done since 2005, but on an even bigger scale and with greater opportunities. We believe in its success and the opportunity it brings customers.

It's going to be a smooth integration. The executive leadership of both IBM and SoftLayer are committed to the long-term success of this acquisition. The SoftLayer management team will remain part of the integrated leadership team to drive the broader IBM SmartCloud strategy into the marketplace. And IBM is best-in-class at integration and has a significant track record of 26 successful acquisitions over the past three years.

IBM will continue to support and enhance SoftLayer's technologies while enabling clients to take advantage of the broader IBM portfolio, including SmartCloud Foundation, SmartCloud Services and SmartCloud Solutions.

-@lavosby

UPDATE: On July 8, 2013, IBM completed its acquisition of SoftLayer: http://sftlyr.com/30z

September 24, 2012

Cloud Computing is not a 'Thing' ... It's a way of Doing Things.

I like to think that we are beyond 'defining' cloud, but what I find in reality is that we still argue over basics. I have conversations in which people still delineate things like "hosting" from "cloud computing" based on degrees of single-tenancy. Now I'm a stickler for definitions just like the next pedantic software-religious guy, but when it comes to arguing minutiae about cloud computing, it's easy to lose the forest for the trees. Instead of discussing underlying infrastructure and comparing hypervisors, we'll look at two well-cited definitions of cloud computing that may help us unify our understanding of the model.

I use the word "model" intentionally there because it's important to note that cloud computing is not a "thing" or a "product." It's a way of doing business. It's an operations model that is changing the fundamental economics of writing and deploying software applications. It's not about a strict definition of some underlying service provider architecture or whether multi-tenancy is at the data center edge, the server or the core. It's about enabling new technology to be tested and fail or succeed in blazing calendar time and being able to support super-fast growth and scale with little planning. Let's try to keep that in mind as we look at how NIST and Gartner define cloud computing.

The National Institute of Standards and Technology (NIST) is a government organization that develops standards, guidelines and minimum requirements as needed by industry or government programs. Given the confusion in the marketplace, there's a huge "need" for a simple, consistent definition of cloud computing, so NIST had a pretty high profile topic on its hands. Their resulting Cloud Computing Definition describes five essential characteristics of cloud computing, three service models, and four deployment models. Let's table the service models and deployment models for now and look at the five essential characteristics of cloud computing. I'll summarize them here; follow the link if you want more context or detail on these points:

  • On-Demand Self Service: A user can automatically provision compute without human interaction.
  • Broad Network Access: Capabilities are available over the network.
  • Resource Pooling: Computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned.
  • Rapid Elasticity: Capabilities can be elastically provisioned and released.
  • Measured Service: Resource usage can be monitored, controlled and reported.

The characteristics NIST uses to define cloud computing are pretty straightforward, but they are still a little ambiguous: How quickly does an environment have to be provisioned for it to be considered "on-demand?" If "broad network access" could just mean "connected to the Internet," why include that as a characteristic? When it comes to "measured service," how granular does the resource monitoring and control need to be for something to be considered "cloud computing?" A year? A minute? These characteristics cast a broad net, and we can build on that foundation as we set out to create a more focused definition.

For our next stop, let's look at Gartner's view: "A style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet infrastructure." From a philosophical perspective, I love their use of "style" when talking about cloud computing. Little differentiates the underlying IT capabilities of cloud computing from other types of computing, so when looking at cloud computing, we really just see a variation on how those capabilities are being leveraged. It's important to note that Gartner's definition includes "elastic" alongside "scalable" ... Cloud computing gets the most press for being able to scale remarkably, but the flip-side of that expansion is that it also needs to contract on-demand.

All of this describes a way of deploying compute power that is completely different than the way we did this in the decades that we've been writing software. It used to take months to get funding and order the hardware to deploy an application. That's a lot of time and risk that startups and enterprises alike can erase from their business plans.

How do we wrap all of those characteristics up into a unified definition of cloud computing? The way I look at it, cloud computing is an operations model that yields seemingly unlimited compute power when you need it. It enables (scalable and elastic) capacity as you need it, and that capacity's pricing is based on consumption. That doesn't mean a provider should charge by the compute cycle, generator fan RPM or some other arcane measurement of usage ... It means that a customer should understand the resources that are being invoiced, and he/she should have the power to change those resources as needed. A cloud computing environment has to have self-service provisioning that doesn't require manual intervention from the provider, and I'd even push that requirement a little further: A cloud computing environment should have API accessibility so a customer doesn't even have to manually intervene in the provisioning process (the customer's app could use automated logic and API calls to scale infrastructure up or down based on resource usage).
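To make that last parenthetical concrete, here's a minimal sketch of an app scaling its own infrastructure through API calls. The provider client below is a stand-in invented for illustration; real providers expose analogous provision/cancel calls, but none of these names come from an actual SDK.

```python
# Sketch: automated scaling decisions driven by resource usage, with API
# calls replacing manual provisioning. FakeCloudClient is illustrative
# only -- not a real provider SDK.

def decide_scaling(cpu_samples, low=20.0, high=80.0):
    """Return 'scale_up', 'scale_down', or 'hold' from recent CPU readings."""
    if not cpu_samples:
        return "hold"
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return "scale_up"
    if avg < low:
        return "scale_down"
    return "hold"

class FakeCloudClient:
    """Stand-in for a provider API client (hypothetical method names)."""
    def __init__(self):
        self.instances = 2

    def provision_instance(self):
        self.instances += 1

    def cancel_instance(self):
        self.instances = max(1, self.instances - 1)

def autoscale_step(client, cpu_samples):
    """One pass of the automated logic: measure, decide, call the API."""
    action = decide_scaling(cpu_samples)
    if action == "scale_up":
        client.provision_instance()
    elif action == "scale_down":
        client.cancel_instance()
    return action
```

The point is the shape of the loop, not the thresholds: usage data in, an API call out, no human in the middle.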

I had the opportunity to speak at Cloud Connect Chicago, and I shared SoftLayer's approach to cloud computing and how it has evolved into a few distinct products that speak directly to our customers' needs:

The session was about 45 minutes, so the video above has been slimmed down a bit for easier consumption. If you're interested in seeing the full session and getting into a little more detail, we've uploaded an un-cut version here.

-Duke

July 25, 2012

ServerDensity: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome David Mytton, Founder of Server Density. Server Density is a hosted server and website monitoring service that alerts you when your website is slow, down or back up.

5 Ways to Minimize Downtime During Summer Vacation

It's a fact of life that everything runs smoothly until you're out of contact, away from the Internet or on holiday. However, you can't be available 24/7 on the chance that something breaks; instead, there are several things you can do to ensure that when things go wrong, the problem can be managed and resolved quickly. To help you set up your own "get back up" plan, we've come up with a checklist of the top five things you can do to prepare for an ill-timed issue.

1. Monitoring

How will you know when things break? Using a tool like Server Density — which combines availability monitoring from locations around the world with internal server metrics like disk usage, Apache and MySQL — means that you can be alerted if your site goes down, and have the data to find out why.

Surprisingly, the most common problems we see are some that are the easiest to fix. One problem that happens all too often is when a customer simply runs out of disk space in a volume! If you've ever had it happen to you, you know that running out of space will break things in strange ways — whether it prevents the database from accepting writes or fails to store web sessions on disk. By doing something as simple as setting an alert to monitor used disk space for all important volumes (not just root) at around 75%, you'll have proactive visibility into your server to avoid hitting volume capacity.
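The 75% alert described above takes only a few lines to implement. Here's a minimal sketch using just the Python standard library; the `print` is a stand-in for whatever notifier you actually use.

```python
# Minimal disk-space check: warn when any important volume crosses a
# usage threshold (75% by default), per the advice above.
import shutil

def check_volumes(paths, threshold=75.0):
    """Return a list of (path, percent_used) for volumes over the threshold."""
    over = []
    for path in paths:
        usage = shutil.disk_usage(path)
        percent = usage.used / usage.total * 100
        if percent >= threshold:
            over.append((path, round(percent, 1)))
    return over

if __name__ == "__main__":
    # Check root plus any other important volumes, not just "/".
    for path, pct in check_volumes(["/"]):
        print(f"ALERT: {path} is {pct}% full")
```

Run it from cron (or fold the logic into your monitoring agent) and the "mysteriously broken writes" class of outage becomes a routine warning instead.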

Additionally, you should define triggers for unusual values that will set off a red flag for you. For example, if your Apache requests per second suddenly drop significantly, that change could indicate a problem somewhere else in your infrastructure, and if you're not monitoring those indirect triggers, you may not learn about those other problems as quickly as you'd like. Find measurable direct and indirect relationships that can give you this kind of early warning, and find a way to measure them and alert yourself when something changes.
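One way to implement that kind of indirect trigger: flag when the latest requests-per-second reading falls well below the recent average. The 50% ratio below is an arbitrary illustration; tune it for your own traffic patterns.

```python
# Indirect trigger sketch: alert when the newest reading drops far below
# the recent baseline, which may indicate a problem elsewhere.

def sudden_drop(history, latest, ratio=0.5):
    """True if `latest` is below `ratio` times the average of `history`."""
    if not history:
        return False
    baseline = sum(history) / len(history)
    return latest < baseline * ratio

# e.g. Apache requests/sec over the last few minutes
recent = [120, 115, 130, 125]
assert sudden_drop(recent, 40) is True    # dropped to a third of baseline
assert sudden_drop(recent, 110) is False  # normal fluctuation
```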

2. Dealing with Alerts

It's no good having alerts sent to someone who isn't responding (or who can't at a given time). Using a service like Pagerduty allows you to define on-call rotations for different types of alerts. Nobody wants to be on-call every hour of every day, so differentiating and channeling alerts in an automated way could save you a lot of hassle. Another huge benefit of a platform like Pagerduty is that it also handles escalations: If the first contact in the path doesn't wake up or is out of service, someone else gets notified quickly.

3. Tracking Incidents

Whether you're the only person responsible or you have a team of engineers, you'll want to track the status of alerts/issues, particularly if they require escalation to different vendors. If an incident lasts a long time, you'll want to be able to hand it off to another person in your organization with all of the information they need. By tracking incidents with detailed notes, you can avoid fatigue and prevent unnecessary repetition of troubleshooting steps.

We use JIRA for this because it allows you to define workflows an issue can progress along as you work on it. It also includes easy access to custom fields (e.g. specifying a vendor ticket ID) and can be assigned to different people.

4. Understanding What Happened

After you have received an alert, acknowledged it and started tracking the incident, it's time to start investigating. Often, this involves looking at logs, and if you only have one or two servers, it's relatively easy, but as soon as you add more, the process can get exponentially more difficult.

We recommend piping them all into a log search tool like (fellow Tech Partners Marketplace participant) Papertrail or Loggly. Those platforms afford you access to all of your logs from a single interface with the ability to see incoming lines in real-time or the functionality to search back to when the incident began (since you've clearly monitored and tracked all of that information in the first three steps).
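The "search back to when the incident began" idea boils down to filtering aggregated log lines to a time window. Hosted tools do this at scale; this toy sketch assumes each line starts with an ISO-8601 timestamp, which is an assumption about your log format, not a universal rule.

```python
# Toy log-window search: keep only lines whose leading ISO-8601
# timestamp falls inside the incident window.
from datetime import datetime

def lines_in_window(lines, start, end):
    """Keep log lines whose leading timestamp falls in [start, end]."""
    kept = []
    for line in lines:
        stamp = datetime.fromisoformat(line.split(" ", 1)[0])
        if start <= stamp <= end:
            kept.append(line)
    return kept

logs = [
    "2012-07-25T03:58:00 app1 OK",
    "2012-07-25T04:01:12 app2 ERROR db timeout",
    "2012-07-25T04:03:45 app1 ERROR db timeout",
    "2012-07-25T04:20:00 app2 OK",
]
window = lines_in_window(
    logs,
    datetime(2012, 7, 25, 4, 0),
    datetime(2012, 7, 25, 4, 10),
)
assert len(window) == 2
assert all("ERROR" in line for line in window)
```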

5. Getting Access to Your Servers

If you're traveling internationally, access to the Internet via a free hotspot like the ones you find in Starbucks isn't always possible. It's always a great idea to order a portable 3G hotspot in advance of a trip. You can usually pick one up from the airport to get basic Internet access without paying ridiculous roaming charges. Once you have your connection, the next step is to make sure you can access your servers.

Both iPhone and Android have SSH and remote desktop apps available that allow you to quickly log into your servers to fix easy problems. Having those tools often saves a lot of time if you don't have access to your laptop, but they also introduce a security concern: If you open server logins to the world so you can log in from the dynamic IPs that change when you use mobile connectivity, then it's worth considering a multi-factor authentication layer. We use Duo Security for several reasons, with one major differentiator being the modules they have available for all major server operating systems to lock down our logins even further.

You're never going to escape the reality of system administration: If your server has a problem, you need to fix it. What you can get away from is the uncertainty of not having a clearly defined process for responding to issues when they arise.

-David Mytton, ServerDensity

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

May 10, 2012

The SoftLayer API and its 'Star Wars' Sibling

When I present about the SoftLayer API at conferences and meetups, I often use an image that shows how many of the different services in the API are interrelated and connected. As I started building the visual piece of my presentation, I noticed a curious "coincidence" about the layout of the visualization:

SoftLayer API Visualization

What does that look like to you?

You might need to squint your eyes and tilt your head or "look beyond the image" like it's one of those "Magic Eye" pictures, but if you're a geek like me, you can't help but notice a striking resemblance to one of the most iconic images from Star Wars:

SoftLayer API == Death Star?

The SoftLayer API looks like the Death Star.

The similarity is undeniable ... The question is whether that resemblance is coincidental or whether we can extrapolate some fuller meaning from the visible similarities. I can hear KHazzy now ... "Phil, while that's worth a chuckle and all, there is no way you can actually draw a relevant parallel between the SoftLayer API and the Death Star." While Alderaan may be far too remote for an effective demonstration, this task is no match for the power of the Phil-side.

Challenge Accepted.

The Death Star: A large space station constructed by the Galactic Empire equipped with a super-laser capable of destroying an entire planet.

The SoftLayer API: A robust set of services and methods which provide programmatic access to all portions of the SoftLayer Platform capable of automating any task: administrative, configuration or otherwise.

Each is the incredible result of innovation and design. The construction of the Death Star and creation of the SoftLayer API took years of hard work and a significant investment. Both are massive in scale, and they're both effective and ruthless when completing their objectives.

The most important distinction: The Death Star was made to destroy while the SoftLayer API was made to create ... The Death Star was designed to subjugate a resistance force and destroy anything in the empire's way. The SoftLayer API was designed to help customers create a unified, automated way of managing infrastructure; though in the process, admittedly that "creation" often involves subjugating redundant, compulsory tasks.

The Death Star and the SoftLayer API can both seem pretty daunting. It can be hard to find exactly what you need to solve all of your problems ... Whether that be an exhaust port or your first API call. Fear not, for I will be with you during your journey, and unlike Obi-Wan Kenobi, I'm not your only hope. There is no need for rebel spies to acquire the schematics for the API ... We publish them openly at sldn.softlayer.com, and we encourage our customers to break the API down into the pieces of functionality they need.
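For a feel of that "first API call" without touching the network, here's how an XML-RPC request to one slice of the API can be assembled with the Python standard library. The endpoint and service names follow the public SoftLayer docs, but treat this as a hedged sketch of the wire format, not a working client.

```python
# Sketch: assemble (but don't send) an XML-RPC request for a
# SoftLayer-style API, one service at a time.
import xmlrpc.client

ENDPOINT = "https://api.softlayer.com/xmlrpc/v3/"  # per public docs

def build_request(service, method, params=()):
    """Return (url, xml_body) for a call to one piece of the API."""
    url = ENDPOINT + service
    body = xmlrpc.client.dumps(tuple(params), methodname=method)
    return url, body

url, body = build_request("SoftLayer_Account", "getObject")
assert url.endswith("/SoftLayer_Account")
assert "<methodName>getObject</methodName>" in body
```

Each service (account, hardware, DNS, ...) is its own schematic; you only need to study the piece you're about to call.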

-Phil (@SoftLayerDevs)

January 27, 2012

Deciphering SoftLayer Acronyms

As a bit of an introduction, I began my career as a GSP and hosted LAMP sites with WHM for SMBs ... NBD. If you're not fluent in "Tech Geek Acronym," that sentence may as well be written in Greek. If I were to de-acronym it, I'd say, "I began my career as a game service provider and hosted Linux, Apache, MySQL and PHP sites with WebHost Manager for small- and medium-sized businesses ... no big deal." For many, the humble acronym is a cornerstone of what it means to be a true techie. Stringing together dozens of three-letter abbreviations (TLAs) to compose semi-coherent sentences would seem to demonstrate your mastery of technology ... The problem is that if the reader of that sentence doesn't share your context, it's not easy to get up to speed.

Every profession has its collection of acronyms. These little expressions serve as a verbal and written shorthand for people who toil daily with the topics of their trade. I'm proud to confess that I've been using these minute medleys of letters for over twelve years. Given that I work on the Internet, I've been exposed to hundreds of acronyms in the fields of technology, business and management, and in my experience, I've had to break through several acronym "barriers" to get in the know. Because I happen to interact with customers every day as the manager of SoftLayer's technical support department, I've encountered a few "Can you tell me what that means?" responses, so I thought I'd write a quick blog post to clarify some of the common acronyms you may see in the SoftLayer vernacular.

Within support we have our CSTs (customer support technicians) and CSAs (customer support admins) who, with the help of SBTs (server build technicians), manage our massive fleet of servers. SBTs are the hands and eyes of our data centers, working closely with the hardware to ensure your server is online and operating in peak condition. The CSTs and CSAs are focused on the software and services that power your websites and applications.

Beyond employee title acronyms, you'll probably see a collection of terms that describe the products and services that we manage. In support, we receive questions about accessing servers or CCIs (cloud computing instances) using KVM (Keyboard, Video and Mouse) or IPMI (Intelligent Platform Management Interface) through our VPN (Virtual Private Network). Once connected to our back-end network through a SSL (Secure Socket Layer), PPTP (Point-to-Point Tunnel Protocol) or IPSEC (Internet Protocol Security) VPN, you have access to services such as DNS (Domain Name Service), NAS (Network Attached Storage) or iSCSI (Internet Small Computer System Interface). Finally, while discussing our network, I often refer to http://www.softlayer.com/diagrams/pod-network-diagram/dal05 to show the difference between a VER (VPN Edge Router) and a BCS (Back-end Customer Switch).

If you run across an acronym you don't understand in a ticket, please let us know so we can share its full meaning ... By using these shortened terms, our team can provide faster service (and you can read their responses more quickly). I know that seeing all the bold TLAs above may seem a little off-putting initially, but as you have a chance to read them in the context of some of the other acronyms you already know, I hope you have an "Aha!" moment ... Like finding the Rosetta Stone or the Code of Hammurabi. Given the quick glance at the terms above, if you want to learn more about one of the TLAs in particular, leave a comment below, and we'll respond in another comment with details.

CBNO

-Chris

May 18, 2011

Panopta: Tech Partner Spotlight

This is a guest blog from Jason Abate of Panopta, a SoftLayer Tech Marketplace Partner specializing in monitoring your servers and managing outages with tools and resources designed to help minimize the impact of outages on your online business.

5 Server Monitoring Best Practices

Prior to starting Panopta, I was responsible for the technology and operations side of a major international hosting company and worked with a number of large online businesses. During this time, I saw my share of major disasters and near catastrophes and had a chance to study what works and what doesn't when Murphy's Law inevitably hits.

Monitoring is a key component of any serious online infrastructure, and there are a wide range of options when it comes to monitoring tools — from commercial and open-source software that you install and manage locally to monitoring services like Panopta. The best solution depends on a number of criteria, but there are five major factors to consider when making this decision.

1. Get the Most Accurate View of Your Infrastructure
Accuracy is a double-edged sword when it comes to monitoring; it can hurt you in two different ways. Check too infrequently and you'll miss outages entirely, making you think that things are rosy when your customers or visitors are actually encountering problems. There are tools that check every 30 minutes or more, but these are useless for real production sites. You should make sure that you can perform a complete check of your systems every 60 seconds so that small problems aren't overlooked.

I've seen many people set up this high-resolution monitoring only to be hit with a barrage of alerts for frequent short-lived problems that were previously never detected. It may hurt to find this, but at least with information about the problem you can fix it once and for all.

The flip side to accuracy is that your monitoring system needs to verify outages to ensure they are real in order to avoid sending out false alerts. There's no faster way to train an operations team to ignore the monitoring system than with false alerts. You want your team to jump at alerts when they come in.

High-frequency checks that are confirmed from multiple physical locations will ensure you get the most accurate view of your infrastructure possible.
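The confirmation half of that advice reduces to a quorum rule: only declare an outage when enough independent locations agree. This sketch uses plain booleans for the check results; in a real system each would come from a probe in a different data center.

```python
# Quorum-based outage confirmation: suppress false alerts from a single
# flaky vantage point by requiring a majority of locations to agree.

def confirmed_outage(check_results, quorum=0.5):
    """True if more than `quorum` of locations report the target as down.

    `check_results` is a list of booleans, True meaning "site is up"
    as seen from one monitoring location.
    """
    if not check_results:
        return False
    down = sum(1 for up in check_results if not up)
    return down / len(check_results) > quorum

# Three of four locations see the site down: real outage, alert.
assert confirmed_outage([False, False, False, True]) is True
# One flaky location out of four: suppress the false alert.
assert confirmed_outage([True, True, True, False]) is False
```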

2. Monitor Every Component of Your Infrastructure
There are lots of components that make up a modern website or application, and any of them could break at any time. You need to make sure that you're watching all of these pieces, whether they're inside your firewall or outside. Lots of monitoring providers focus purely on remotely accessible network services, which are important but only one half of the picture. You also want an inside view of how your server's resources are being consumed, and how internal-only network devices (such as backend database servers) are performing.

Completeness also means that it's economically feasible to watch everything. If the pricing structure of your monitoring tool is set up in a way that makes it cost-prohibitive to watch everything, then the value of your monitoring setup is greatly diminished. The last thing you want to run into when troubleshooting a complex problem is to find that you don't have data about one crucial server because you weren't monitoring it.

Make sure your monitoring system is able to handle all of your server and network components and gives you a complete view of your infrastructure.

3. Notify the Right People at the Right Time
You know that when the pager beeps or the phone rings about an outage, your heart beats a little faster. Of course, it's usually in the middle of the night while you're sleeping, right? As painful as it may be, you want your monitoring system to get you up when things are really hitting the fan - it's still better than hearing from angry customers (and bosses!) the next morning.

However, not all outages are created equal, and you may not want to be woken up when one of your clustered webservers briefly goes down and then corrects itself a few minutes later. The key to a successful monitoring solution is to have plenty of flexibility in your notification setup, including the ability to set up different notification types based on the criticality of the service.

You also want to be able to escalate a problem, bringing in additional resources for long-running problems. This way outages don't go unnoticed for hours while the on-call admin who perpetually sleeps through pages gets more shut-eye.

Make sure that when it comes to notification, your monitoring system is able to work with your team's preferred setup, not the other way around.

4. Don't Just Detect Problems, Streamline Fixing Them
Sending out alerts about a problem is important, but it's just the first step in getting things back to normal. Ideally, after being alerted, an admin can jump in and solve whatever the problem is, and life goes on. All too often, though, things don't go this smoothly.

You've probably run into situations where an on-call admin is up most of the night with a problem. That's great, but when the rest of the team comes in the next morning they have no idea what was done. What if the problem comes up again? Are there important updates that need to be deployed to other servers?

Or maybe you have a big problem that attracts interest from your call center and support staff (your monitoring system did alert you before they walked up, right?). Or management from other departments interrupts to get updates on the problem so they can head off a possible PR disaster.

These interruptions are important to the operation of your business, but they pull administrators away from actually solving the problem, which just makes things worse. There should be a better way to handle these situations. Given its central role in your infrastructure management, your monitoring system is in a great position to help streamline the problem-solving process.

Make sure your monitoring system gives you tools to keep everyone on the same page by letting everyone easily communicate and log what was ultimately done to resolve the problem.

5. Demonstrate How Your Infrastructure Is Performing
Your role as an administrator is to keep your infrastructure up and running. It's unfortunately a tough spot to be in - do your job really well and no one notices. But mess up, and it's clearly visible to everyone.

Solid reporting capabilities from your monitoring system give you a tool to help balance this situation. Be sure you can get summary reports that demonstrate how well things are running, or that make the argument for changes and then follow up to show progress. Availability reports also give you a "big picture" view of how your infrastructure is performing that often gets lost in the chaos of day-to-day operations.

Detailed reporting gives you the data you need to accurately assess and promote the health of your infrastructure.

The Panopta Difference
There are quite a few options available for monitoring your servers, each of which comes with trade-offs. We've designed Panopta to focus on these five criteria, and having built on top of SoftLayer's infrastructure from the very beginning, we're excited to be a part of the SoftLayer Technology Marketplace.

I would encourage you to try out Panopta and other solutions and see which is the best fit for the specific requirements of your infrastructure and your team - you'll appreciate what a good night's sleep feels like when you don't have to worry about whether your infrastructure is up and running.

-Jason Abate, Panopta

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
May 11, 2011

Acunote: Tech Partner Spotlight

This is a guest blog from Gleb Arshinov of Acunote, a SoftLayer Tech Marketplace Partner specializing in online project management and Scrum software.

Company Website: http://www.acunote.com
Tech Partners Marketplace: http://www.softlayer.com/marketplace/acunote

Implementing Project Management in Your Business

Project management has a bit of a stigma for being a little boring. In its simplest form, project management involves monitoring and reporting progress on a given initiative, and while it sounds simple, it's often an afterthought ... if it's ever a thought at all. Acunote is in the business of making project management easy and accessible for businesses of all sizes.

I've been in and around project management for years now, and while I could talk your ear off about Acunote, I'd rather share a few "Best Practices" for incorporating project management in your business. As you begin to understand how project management principles can be incorporated into your day-to-day activities, you'll be in a better position to understand the value proposition of tools like Acunote.

Track Planning, Not Just Execution
One of the biggest mistakes many companies make as they begin to incorporate project management is tracking only the execution of a project. While execution is certainly the most visible aspect, monitoring the behind-the-scenes planning as well gives you a fuller view of where the project came from, where it is now and where it is expected to go in the future. It's difficult to estimate how long projects will take, and a lot of that difficulty comes from insufficient planning. By planning what will need to be done and in what order, a bigger project becomes a series of smaller progress steps, with planning and execution happening in tandem.

For many projects, especially for developers, it's actually impossible to predict most of what needs to get done upfront. That doesn't mean that there isn't a predictable aspect to a given project, though. Good processes and tools can capture how much of the work was planned upfront, how much was discovered during the project, and how the project evolved as a result. In addition to giving you direction as a project moves forward, documenting the planning and execution of a given project will also give you watermarks for how far the project has come (and why).

Use Tools and Resources Wisely
It's important to note that the complexity of coordinating everything in a company increases exponentially as the company grows. With fewer than ten employees working on a project in a single department, you can probably get by without being very intentional about project management, but as you start adding users and departments that don't necessarily work together regularly, project management becomes more crucial to keeping everyone on the same page.

The most effective project management tools are simple to implement and easy to use ... If a project management tool is a hassle to use, no one's going to use it. It should be sort of a "home base" for individual contributors to do their work efficiently. The more streamlined project management becomes in your operating practices, the more data it can generate and the more you (and your organization's management team) can learn from it.

Make Your Distributed Team Thrive
More and more, companies are allowing employees to work remotely, and while that changes some of the operations dynamics, it doesn't have to affect productivity. The best thing you can do to manage a thriving distributed team is to host daily status meetings to keep everyone on the same page. The more you communicate, the quicker you can adjust your plans if things move off-track, and with daily meetings, someone can only be a day behind their expectations before the project's status is reevaluated. With many of the collaboration tools available, these daily meetings can be accompanied by daily progress reports and real-time updates.

Acunote is designed to serve as a simple support structure and a vehicle to help you track and meet your goals, whether they be in development, accounting or marketing. We're always happy to help companies understand how project management can make their lives easier, so if you have any questions about what Acunote does or how it can be incorporated into your business, let us know: support@acunote.com

-Gleb Arshinov, Acunote

February 15, 2011

Five Ways to Use Your VPN

One of the many perks of being a SoftLayer customer is having access to your own private network. Perhaps you started out with a server in Dallas, later expanded to Seattle, and are now considering a new box in Washington, D.C. for complete geographic diversity. No matter the distance or how many servers you have, the private network bridges the gaps between you, your servers, and SoftLayer's internal services by bringing all of these components together into a secure, integrated environment that can be accessed as conveniently as if you were sitting right in the data center.

As if our cutting-edge management portal and API weren't enough, SoftLayer offers complimentary VPN access to the private network. This often-underestimated feature allows you to integrate your SoftLayer private network into your personal or corporate LAN, making it possible to access your servers with the same security and flexibility that a local network can offer.

Let's look at a few of the many ways you can take advantage of your VPN connection:

1. Unmetered Bandwidth

Unlike the public network that connects your servers to the outside world, the traffic on your private network is unlimited. This allows you to transfer as much data as you wish from one server to another, as well as between your servers and SoftLayer's backup and network storage devices – all for free.

When you use the VPN service to tap into the private network from your home or office, you can download and upload as much data as you want without having to worry about incurring additional charges.

2. Secure Data Transfer

Because your VPN connection is encrypted, all traffic between you and your private network is automatically secure — even when transferring data over unencrypted protocols like FTP.
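As a sketch of what that means in practice, the upload below rides plain, unencrypted FTP, yet between your workstation and the private network it travels inside the encrypted VPN tunnel. The address, path and credentials are placeholders for illustration only - substitute your own private IP and account:

```shell
# Placeholders - use your server's private IP and your own FTP account.
# With the VPN tunnel up, even clear-text FTP traffic to private
# addresses is wrapped in the tunnel's encryption in transit.
curl -T backup.tar.gz "ftp://ftpuser:secret@10.4.8.20/backups/"
```

The same holds for any other clear-text protocol (rsync over its native daemon, plain HTTP, and so on) as long as the traffic is destined for a private network address reached through the VPN.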

3. Protect Sensitive Services

Even with strong passwords, leaving your databases and remote access services exposed to the outside world is asking for trouble. With SoftLayer, you don't have to take these risks. Simply configure sensitive services to only listen for connections from your private network, and use your secure VPN to access them.

If you run Linux or BSD, securing your SSH daemon is as easy as adding the line ListenAddress a.b.c.d to your /etc/ssh/sshd_config file (replace a.b.c.d with the IP address assigned to your private network interface).
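A minimal sketch of that change, assuming a hypothetical private address of 10.4.8.15 (use the address bound to your own private interface, and the service command appropriate to your distribution):

```shell
# Hypothetical private IP - substitute the address on your private interface.
PRIVATE_IP=10.4.8.15

# Bind sshd to the private network only.
echo "ListenAddress ${PRIVATE_IP}" >> /etc/ssh/sshd_config

# Validate the configuration before reloading (exits non-zero on errors).
sshd -t

# Reload sshd to pick up the change (the service name may be "ssh"
# rather than "sshd" on Debian-style systems).
service sshd restart
```

Keep your VPN connection up and verified before restarting sshd, or you can lock yourself out of the box.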

4. Lock Down Your Server in Case of Emergency

In the unfortunate event of a security breach or major software bug, SoftLayer allows you to virtually "pull the plug" on your server, effectively cutting off all communication with the outside world.

The difference with the competition? Because you have a private network, you can still access your server over the VPN to work on the problem – all with the peace of mind that your server is completely off-limits until you're ready to bring it back online.

5. Remote Management

SoftLayer's dedicated servers sport a neat Intelligent Platform Management Interface (IPMI) that takes remote management to a whole new level. From reboots to power supply control to serial console and keyboard-video-mouse (KVM) access, you can do anything yourself.

Using tools like SuperMicro's IPMIView, you can connect to your server's management interface over the VPN to perform a multitude of low-level management tasks, even when your server is otherwise unreachable. Has your server shut itself off? You can power it back on. Frozen system? Reboot from anywhere in the world. Major crash? Feeling adventurous? Mount a CD-ROM image and use the KVM interface to install a new operating system yourself.
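If you prefer the command line to a GUI, the open-source ipmitool utility speaks the same protocol over the VPN. The address and credentials below are placeholders - substitute the IPMI details for your own server:

```shell
# Placeholders - substitute your server's IPMI address and credentials.
BMC=10.4.8.30
BMC_USER=admin
BMC_PASS=changeme

# Is the machine powered on?
ipmitool -I lanplus -H "$BMC" -U "$BMC_USER" -P "$BMC_PASS" chassis power status

# Power it back on after an unexpected shutdown.
ipmitool -I lanplus -H "$BMC" -U "$BMC_USER" -P "$BMC_PASS" chassis power on

# Hard-reset a frozen system.
ipmitool -I lanplus -H "$BMC" -U "$BMC_USER" -P "$BMC_PASS" chassis power reset

# Attach to the serial console (Serial over LAN); detach with "~."
ipmitool -I lanplus -H "$BMC" -U "$BMC_USER" -P "$BMC_PASS" sol activate
```

Because these commands talk to the management interface rather than the operating system, they work even when the server itself is hung or powered off.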

This list is just the beginning. Once you've gotten a taste of the infinite possibilities that come with having out-of-band access to your hosted environment, you'll never want to go back.

Now, go have some fun!

-Nick

July 30, 2010

One Size Doesn’t Fit All

All of my life I have been awkward in some way or another: an afro in middle school, braces in high school, a soccer player in a football-loving town, not to mention that I have tripped at least once a day for roughly 11 years now. The most frustrating part, however, was that from middle school through high school, when clothing actually mattered, I was always too tall and too skinny to make any of the trendy threads fit me. Have you ever been able to find 28x33 jeans? I sure couldn’t. So I settled for 30x32s, and everyone enjoyed the visible cinch in my waist from my belt and a nice view of my sweet white tube socks shining from right below the cuff of my jeans.

When I was a kid, I could’ve sworn that we had a magical washing machine, because in the few instances when I actually found a nice small shirt that didn’t make me look like I had a blanket draped over my skinny bones, I went home, washed it like every normal human being does, and presto chango, the dryer would spit out a crop top circa 1980. Unless you are shopping for dress shirts or shopping online, this predicament still haunts young preteen boys like my old self.

I’m sure you can imagine my frustration. I always wished that someone would just offer more customization and options to provide me with a better fit. Instead of satisfying the customer, these companies limited their customer base by expecting the customer to bend and adjust to the options of the provider. But who is the servicer, and who is supposed to be getting serviced?

Fortunately, those of you looking for cloud computing instances with specific and varying RAM, CPU and storage needs can stop looking. SoftLayer Technologies now offers a revolutionary new service called Build Your Own Cloud (BYOC for short) that allows you to completely customize your cloud service and tailor it to your exact specifications. BYOC is featured in reviews by both Neovise and PCWorld. Neovise praises the new development, stating that “This new ability to personalize the size and price of cloud servers can benefit every SoftLayer customer.” The impressive part of this development is that we are the first and only ones offering such a service; once again we are on the cutting edge of technology, leading the way for other hosting services to follow. After all, in the words of Nathan Day, SoftLayer’s CTO, “One thing we’ve learned along the way is that one size doesn’t fit all” (PCWorld). So when considering who can best meet your needs, just remember that at SoftLayer, you can have it your way.

-Scott
