Posts Tagged 'Best Practices'

May 10, 2013

Understanding and Implementing Coding Standards

Coding standards provide a consistent framework for development within a project and across projects in an organization. A dozen programmers can complete a simple project in a dozen different ways by using unique coding methodologies and styles, so I like to think of coding standards as the "rules of the road" for developers.

When you're driving in a car, traffic is controlled by "standards" such as lanes, stoplights, yield signs and laws that set expectations around how you should drive. When you take a road trip to a different state, the stoplights might be hung horizontally instead of vertically or you'll see subtle variations in signage, but because you're familiar with the rules of the road, you're comfortable with the mechanics of driving in this new place. Coding standards help control development traffic and provide the consistency programmers need to work comfortably with a team across projects. The problem with allowing developers to apply their own unique coding styles to a project is the same as allowing drivers to drive as they wish ... Confusion about lane usage, safe passage through intersections and speed would result in collisions and bottlenecks.

Coding standards often seem restrictive or laborious when a development team starts considering their adoption, but they don't have to be ... They can be implemented methodically to improve the team's efficiency and consistency over time, and they can be as simple as establishing that all instantiations of an object must be referenced with a variable name that begins with a capital letter:

$User = new User();

While that example may seem overly simplistic, it actually makes life a lot easier for all of the developers on a given project. Regardless of who created that variable, every other developer can see the difference between a variable that holds data and one that instantiates an object. Think about the shapes of signs you encounter while driving ... You know what a stop sign looks like without reading the word "STOP" on it, so when you see a red octagon (in the United States, at least), you know what to do when you approach it in your car. In the same way, seeing a capitalized variable name tells you its function at a glance.

The example I gave of capitalizing instantiated objects is just an example. When it comes to coding standards, the most effective rules your team can incorporate are the ones that make the most sense to you. While there are a few best practices in terms of formatting and commenting in code, the most important characteristics of coding standards for a given team are consistency and clarity.

So how do you go about creating a coding standard? Most developers dislike doing unnecessary work, so the easiest way to create a coding standard is to use an already-existing one. Take a look at any libraries or frameworks you are using in your current project. Do they use any coding standards? Are those coding standards something you can live with or use as a starting point? You are free to make any changes you wish in order to best facilitate your team's needs, and you can even decide how strictly specific standards must be followed. Take, for example, left-hand comparisons:

if ( $a == 12 ) {} // right-hand comparison
if ( 12 == $a ) {} // left-hand comparison

Both of these statements are valid but one may be preferred over the other. Consider the following statements:

if ( $a = 12 ) {} // supposed to be a right-hand comparison but is now an assignment
if ( 12 = $a ) {} // supposed to be a left-hand comparison but is now an assignment

The first statement will evaluate to true because $a is assigned the value of 12, which will then cause the code within the if statement to execute (not the desired result). The second statement will cause an error, making it obvious that a mistake in the code has occurred. Because our team couldn't come to a consensus, we decided to allow both of these standards ... Either of these two formats is acceptable and both will pass code review, but they are the only two acceptable variants. Code that deviates from those two formats would fail code review and would not be allowed in the code base.

Coding standards play an important role in efficient development of a project when you have several programmers working on the same code. By adopting coding standards and following them, you'll avoid a free-for-all in your code base, and you'll be able to look at every line of code and know more about what that line is telling you than what the literal code is telling you ... just like seeing a red octagon posted on the side of the road at an intersection.

-@SoftLayerDevs

March 20, 2013

Learntrail: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Daniel Hamilton, CTO of Learntrail. Learntrail is a learning management system for creating, assigning, and tracking e-learning programs. It helps you train your employees and develop a more effective workforce.

The Power of Great People

In 1995, Peter Drucker, one of the founding fathers of modern-day management, shared a profoundly simple idea: "People are our greatest asset." Today, almost two decades later, that quote is reiterated in one form or another by the top executives at the largest companies in the world. You can have the best product, a stellar marketing plan and the perfect vision, but without a great team of people to execute with those tools, your company isn't going anywhere.

In an online world now driven by innovation, it's easy to want to substitute "technology" for "people" as a business's greatest asset, but I'd argue that Peter Drucker's quote is as true now as it was in 1995. Think about it in terms of keeping your website online. Your server's hardware — a powerful CPU, ample storage space, tons of RAM and a fast network connection — might dictate how your website runs when everything is going smoothly, but when your traffic spikes over the holidays or an article on your blog goes viral, your ability to respond quickly to keep your website operational will be dictated by the quality of your server admins and support staff.

While good companies focus on improving their products, great companies focus on improving their people. In 2010, Google approached the challenge of improving its people by creating GoogleEDU — a program designed to formalize the process of educating employees in new skills, strategies and perspectives. Beyond building a stronger team of smarter individuals, Google is clearly investing in its employees, and that investment goes a long way to engender loyalty and job satisfaction.

What if your business doesn't happen to have Google's resources or a $269 billion market cap? That's the problem Learntrail set out to solve. Our platform was designed to make it easy for businesses to create stunning, full-featured multimedia courses that can be monitored and tracked in detail with a few clicks.

Learntrail Chalkboard

You can bring your new-hire orientation program online, centralize training documents for new products, or create simple lessons about company-specific procedures through a sleek, easy-to-use portal. You’ll also get real-time reports about your team’s progress, so you'll know exactly how your training is being used by your employees. To prove how confident we are that Learntrail will meet your needs, we have a risk-free, no credit card required 14-day trial that lets you kick the tires and get a feel for how Learntrail can work for your business.

Your people are your greatest asset.

-Daniel Hamilton, Learntrail

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
April 18, 2012

Dome9: Tech Partner Spotlight

This guest blog comes to us from Dave Meizlik, Dome9 VP of marketing and business development. Dome9 is a featured member of the SoftLayer Technology Partners Marketplace. With Dome9, you get secure, on-demand access to all your servers by automating and centralizing firewall management and making your servers virtually invisible to hackers.

Three Tips to Securing Your Cloud Servers

By now everyone knows that security is the number one concern among cloud adopters. But lesser known is why and what to do to mitigate some of the security risks ... I hope to shed a little light on those points in this blog post, so let's get to it.

One of the greatest threats to cloud servers is unsecured access. Administrators leave ports (like RDP and SSH) open so they can connect to and manage their machines ... After all, they can't just walk down the hall to gain access to them like with an on-premise network. The trouble with this practice is that it leaves these and other service ports open to attack from hackers who need only guess the credentials or exploit a vulnerability in the application or OS. Many admins don't think about this because for years they've had a hardened perimeter around their data center. In the cloud, however, the perimeter collapses down to each individual server, and so too must your security.

Tip #1: Close Service Ports by Default

Instead of leaving ports — from SSH to phpMyAdmin — open and vulnerable to attack, close them by default and open them only when, for whom, and as long as is needed. You can do this manually — just be careful not to lock yourself out of your server — or you can automate the process with Dome9 for free.

Dome9 provides a patent-pending technology called Secure Access Leasing, which enables you to open a port on your server with just one click from within Dome9 Central, our SaaS management console, or as an extension in your browser. With just one click, you get time-based secure access and the ability to empower a third party (e.g., a developer) with access easily and securely.

When your service ports are closed by default, your server is virtually invisible to hackers because the server will not respond to an attacker's port scans or exploits.
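
For those taking the manual route mentioned above, a default-deny iptables policy is the usual starting point. This is only a sketch of the idea, not Dome9's implementation: the admin IP is a placeholder, the commands must run as root, and you should try it first on a machine you can afford to lock yourself out of.

```shell
# Close everything inbound by default:
iptables -P INPUT DROP
# Keep replies to connections the server itself initiated:
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow loopback traffic:
iptables -A INPUT -i lo -j ACCEPT
# Open SSH only for one admin IP (placeholder), only while it's needed:
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
# When the access "lease" should expire, delete that rule again:
# iptables -D INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
```

The last two lines are the manual version of a time-based lease: you open the port for a specific source address and remove the rule when the work is done.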

Tip #2: Make Your Security as Elastic as Your Cloud

Another key challenge in cloud security is management. In a traditional enterprise you have a semi-defined perimeter with a firewall and a strong, front-line defense. In the cloud, however, that perimeter collapses down to the individual server and is therefore multiplied by the number of servers you have in your environment. Thus, the number of perimeters and policies you have to manage increases exponentially, adding complexity and cost. Remember, if you can't manage it, you can't secure it.

As you re-architect your infrastructure, take the opportunity to re-architect your security, keeping in mind that you need to be able to scale instantaneously without adding management overhead. To do so, create group-based policies for similar types of services, with role-based controls for users that need access to your cloud servers.

With Dome9, for example, you can create an unlimited number of security groups — umbrella policies applied to one or more servers and for which you can create user-based self-service access. So, for example, you can set one policy for your web servers and another for your SQL database servers, then you can enable your web developers to self-grant access to the web servers while the DBAs have access to the database servers. Neither, however, may be able to access the others' servers, but you — the super admin — can. Any new servers you add on-the-fly as you scale up your infrastructure are automatically paired with your Dome9 account and attached to the relevant security group, so your security is truly elastic.

Tip #3: Make Security Your Responsibility

The last key security challenge is understanding who's responsible for securing your cloud. It's here that there's a lot of debate and folks get confused. According to a recent Ponemon Institute study, IT pros point fingers equally at the cloud provider and cloud user.

When everyone is responsible, no one is responsible. It's best to pick up the reins and be your own best champion. Great cloud and hosted providers like SoftLayer are going to provide an abundance of controls — some their own, and some from great security providers such as Dome9 (shameless, I know) — but how you use them is up to you.

I liken this to a car: Whoever made your car built it with safety in mind, adding seat belts and air bags and lots of other safeguards to protect you. But if you go speeding down the freeway at 140 MPH without a seatbelt on, you're asking for trouble. When you apply this concept to the cloud, I think it helps us better define where to draw the lines.

At the end of the day, consider all your options and how you can use the tools available to most effectively secure your cloud servers. It's going to be different for just about everyone, since your needs and use cases are all different. But tools like Dome9 let you self-manage your security at the host layer and allow you to apply security controls for how you use a cloud platform (i.e., helping you be a safe driver).

Security is a huge topic, and I didn't even scratch the surface here, but I hope you've learned a few things about how to secure your cloud servers. If the prospect of scaling out security policies across your infrastructure isn't particularly appealing, I invite you to try out Dome9 (for free) to see how easily you can manage automated cloud security on your SoftLayer server. It's quick, easy, and (it's worth repeating a few times...) free:

  1. Create a Dome9 account at https://secure.dome9.com/Account/Register?code=SoftLayer
  2. Add the Dome9 agent to your SoftLayer server
  3. Configure your policy in Dome9 Central, our SaaS management console

SoftLayer customers that sign up for Dome9 enjoy all the capabilities of Dome9 free for 30 days. After that trial period, you can opt to use either our free Lite Cloud, which provides security for an unlimited number of servers, or our Business Cloud for automated cloud security.

-Dave Meizlik, Dome9

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
February 28, 2012

14 Questions Every Business Should Ask About Backups

Unfortunately, having "book knowledge" (or in this case "blog knowledge") about backups and applying that knowledge faithfully and regularly are not necessarily one and the same. Regardless of how many times you hear it or read it, if you aren't actively protecting your data, YOU SHOULD BE.

Here are a few questions to help you determine whether your data is endangered:

  1. Is your data backed up?
  2. How often is your data backed up?
  3. How often do you test your backups?
  4. Is your data backed up externally from your server?
  5. Are your backups in another data center?
  6. Are your backups in another city?
  7. Are your backups stored with a different provider?
  8. Do you have local backups?
  9. Are your backups backed up?
  10. How many people in your organization know where your backups are and how to restore them?
  11. What's the greatest amount of data you might lose in the event of a server crash before your next backup?
  12. What is the business impact of that data being lost?
  13. If your server were to crash and the hard drives were unrecoverable, how long would it take you to restore all of your data?
  14. What is the business impact of your data being lost or inaccessible for the length of time you answered in the last question?

We can all agree that the idea of backups and data protection is a great one, but when it comes to investing in that idea, some folks change their tune. While each of the above questions has a "good" answer when it comes to keeping your data safe, your business might not need "good" answers to all of them for your data to be backed up sufficiently. You should understand the value of your data to your business and invest in its protection accordingly.
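
As one concrete illustration of question 4 (keeping a copy off the server itself), a simple tar-over-SSH pipeline works for small sites. The remote host below is a placeholder, so that line is shown as a comment; the runnable lines only rehearse the tar half locally.

```shell
# Off-server copy, streamed over SSH so no temp file sits on the web server:
#   tar czf - /var/www /etc | ssh backup@othersite 'cat > www-backup.tar.gz'

# Local rehearsal of the tar half:
mkdir -p /tmp/site
echo 'data' > /tmp/site/file.txt
tar czf /tmp/site-backup.tar.gz -C /tmp site
# Verify the archive is readable and lists its contents:
tar tzf /tmp/site-backup.tar.gz
```

Restoring is the same pipeline in reverse, which is worth actually rehearsing: question 3 above (how often do you test your backups?) is the one most people fail.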

For example, a million-dollar business running on a single server will probably value its backups more highly than a hobbyist with a blog she contributes to once every year and a half. The million-dollar business needs more "good" answers than the hobbyist, so the business should invest more in the protection of its data than the hobbyist.

If you haven't taken time to quantify the business impact of losing your primary data (questions 11-14), sit down with a pencil and paper and take time to thoughtfully answer those questions for your business. Are any of those answers surprising to you? Do they make you want to reevaluate your approach to backups or your investment in protecting your data?

The funny thing about backups is that you don't need them until you NEED them, and when you NEED them, you'll usually want to kick yourself if you don't have them.

Don't end up kicking yourself.

-@khazard

P.S. SoftLayer has a ton of amazing backup solutions, but in the interest of making this post accessible and sharable, I won't go crazy linking to them throughout the post. The latest product release that got me thinking about this topic was the SoftLayer Object Storage launch, and if you're concerned about your answers to any of the above questions, object storage may be an economical way to easily get some more "good" answers.

November 11, 2011

UNIX Sysadmin Boot Camp: Passwords

It's been a while since our last UNIX Sysadmin Boot Camp ... Are you still with me? Have you kept up with your sysadmin exercises? Are you starting to get comfortable with SSH, bash and your logs? Good. Now I have an important message for you:

Your password isn't good enough.

Yeah, that's a pretty general statement, but it's shocking how many people are perfectly fine with a six- or eight-character password made up of lowercase letters. Your approach to server passwords should be twofold: stick with it, and be organized.

Remembering a 21-character password like ^@#*!sgsDAtg5t#ghb%!^ may seem daunting, but you really don't have to remember it. For a server, secure passwords are just as vital as any other form of security. You need to get in the habit of documenting every username and password you use and what they apply to. For the sake of everything holy, keep that information in a safe place. Folding it up and shoving it in your socks is not advised (See: blisters).
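
One low-effort way to mint passwords like that, assuming openssl is installed (it almost always is on a UNIX system):

```shell
# 21 random bytes base64-encoded -> exactly 28 printable characters,
# with no shell-history-mangling quoting issues:
PASS=$(openssl rand -base64 21)
printf '%s\n' "$PASS"
```

Generate it, paste it into your documented (and safely stored) credentials list, and never try to memorize it.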

Want to make your approach to password security even better? Change your passwords every few months, and make sure you and at least one other trusted colleague or friend knows where to find them. You're dealing with sensitive material, but you can never guarantee that you will be available to respond to a server-based emergency. In these cases, your friends and co-workers end up scrambling through bookshelves and computer files to find any trace of useful information.

Having been one of the abovementioned co-workers in this situation, I can attest that it is nearly impossible to convince customer service that you are indeed a representative of the company when you have no verification information or passwords to provide.

Coming soon: Now that you've got some of the basics, what about the not-so-basics? I'll start drafting some slightly more advanced tips for the slightly more advanced administrator. If you have any topics you'd like us to cover, don't hesitate to let us know in a comment below.

-Ryan

May 11, 2011

Acunote: Tech Partner Spotlight

This is a guest blog from Gleb Arshinov of Acunote, a SoftLayer Tech Marketplace Partner specializing in online project management and Scrum software.

Company Website: http://www.acunote.com
Tech Partners Marketplace: http://www.softlayer.com/marketplace/acunote

Implementing Project Management in Your Business

Project management has a bit of a stigma for being a little boring. In its simplest form, project management involves monitoring and reporting progress on a given initiative, and while it sounds simple, it's often an afterthought ... if it's ever a thought at all. Acunote is in the business of making project management easy and accessible for businesses of all sizes.

I've been in and around project management for years now, and while I could talk your ear off about Acunote, I'd rather share a few "Best Practices" for incorporating project management in your business. As you begin to understand how project management principles can be incorporated into your day-to-day activities, you'll be in a better position to understand the value proposition of tools like Acunote.

Track Planning, Not Just Execution
One of the biggest mistakes many companies make as they begin to incorporate project management is the tendency to track only the execution of a project. While that aspect of the project is certainly the most visible, by also monitoring the behind-the-scenes planning, you have a fuller view of where the project came from, where it is now and where it is expected to go in the future. It's difficult to estimate how long projects will take, and a lot of that difficulty comes from insufficient planning. By planning what will need to be done and in what order, a bigger project becomes a series of smaller progress steps with planning and execution happening in tandem.

For many projects, especially for developers, it's actually impossible to predict most of what needs to get done upfront. That doesn't mean that there isn't a predictable aspect to a given project, though. Good processes and tools can capture how much of the work was planned upfront, how much was discovered during the project, and how the project evolved as a result. In addition to giving you direction as a project moves forward, documenting the planning and execution of a given project will also give you watermarks for how far the project has come (and why).

Use Tools and Resources Wisely
It's important to note that the complexity of coordinating everything in a company increases exponentially as the company grows. With fewer than ten employees working on a project in a single department, you can probably get by without being very intentional about project management, but as you start adding users and departments that don't necessarily work together regularly, project management becomes more crucial to keeping everyone on the same page.

The most effective project management tools are simple to implement and easy to use ... If a project management tool is a hassle to use, no one's going to use it. It should be sort of a "home base" for individual contributors to do their work efficiently. The more streamlined project management becomes in your operating practices, the more data it can generate and the more you (and your organization's management team) can learn from it.

Make Your Distributed Team Thrive
More and more, companies are allowing employees to work remotely, and while that changes some of the operations dynamics, it doesn't have to affect productivity. The best thing you can do to manage a thriving distributed team is to host daily status meetings to keep everyone on the same page. The more you communicate, the quicker you can adjust your plans if things move off-track, and with daily meetings, someone can only be a day behind their expectations before the project's status is reevaluated. With many of the collaboration tools available, these daily meetings can be accompanied by daily progress reports and real-time updates.

Acunote is designed to serve as a simple support structure and a vehicle to help you track and meet your goals, whether they be in development, accounting or marketing. We're always happy to help companies understand how project management can make their lives easier, so if you have any questions about what Acunote does or how it can be incorporated into your business, let us know: support@acunote.com

-Gleb Arshinov, Acunote

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
March 21, 2011

7 Steps to Server Migration Success

It's been a long journey: Four years ago you paid a premium for your humble domain, and things have changed a lot since then. You want to move to a newer, cheaper, nicer place, but you dread the process of collecting all of your stuff and moving it somewhere else. What's the best way to pack it up? Will it be safe during the move? What can you throw away to make this migration easier? What about your mail? You don't want to miss anything in the midst of your move. Doesn't this sound like the last time you moved to another house? The funny thing is that while all of those questions could be describing a physical move, we're actually talking about migrating web servers.

At some point, you'll have to face moving from one server to another. Hopefully it's in the same "neighborhood" or network since that will make the speed of the move a lot faster and less expensive ... especially if the neighborhood has free private network traffic and incoming bandwidth like ours </plug>! Regardless of where you're moving your data, there are seven key steps to preparing and executing a successful server migration:

1. Prepare Your DNS
When you move your site(s) to a new server, you will likely get new IP addresses. Because of DNS caching, once you change your IP, it can take up to seven days before the change propagates throughout the Internet. To minimize that delay, your first step in preparing for the migration is to change your DNS records' TTL (Time To Live). This value designates how long your DNS entries may be cached.

It's best to do this step several days before you plan to move. I suggest you do it at least a week in advance to cover at least 95% to 99% of your traffic. I would also update or temporarily remove any SPF records you have. Details: http://en.wikipedia.org/wiki/Sender_Policy_Framework
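
If you're not sure what your current TTL is, dig will show it. A quick sketch (example.com stands in for your domain; the dig line is commented since its output depends on your network):

```shell
# Query your A record and its current TTL:
#   dig +noall +answer example.com A
# Typical answer line:
#   example.com.  86400  IN  A  93.184.216.34
# The second field is the TTL in seconds. Extracting it:
printf 'example.com. 86400 IN A 93.184.216.34\n' | awk '{print $2; exit}'
# -> 86400
```

A TTL of 86400 is a full day; dropping it to 300 about a week before the move means resolvers will pick up your new IP within five minutes of the cutover.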

2. Set up Your New Server
Make sure your new server has the correct operating system installed and ready and that all hardware meets your applications' specifications. Decide how you wish to provision your site's IP addresses and make note of any differences.

3. Tune Your Server
Check your file system and make sure your partitions are set up as you need them. Set up RAID if required. Most hosts will set up RAID and your partitions for you and even provide you with test results of the hardware ... At least we do </another plug>! This is also the perfect time to implement any security practices within the OS and firewall (if installed). It's important to complete these steps before you get too far because they're much easier to do without content on the server.

4. Install Base Applications
Once you verify your server configuration, set up your operating system and secure your new server, it's time to install the supporting software you plan to use. Examples include webserver, email server and database server software and any application server software. Prepare a sample web page to test that each piece of software is installed correctly and functioning.

5. Begin Data Migration
Now you're ready to do an initial data migration. Due to the enormous variance in types of data, kinds of servers, amounts of data and applications, how you proceed with this step can vary dramatically. Databases might require a backup and restore process while static data may only require the use of a tool like rsync.

The best way to complete this step is to do it during off-peak times. Understand how long it'll take to move all of your data, and set aside a conservative window to complete the move.

Once the data has been migrated, you should be able to test your website application at its newly assigned IP address.

6. Move from Old to New
Now that you've extensively tested your new server, it's time to set an official move date and time. By now, your DNS changes have taken hold (assuming you changed them a week ago), and you are ready to throw the switch on your new infrastructure. Depending on the nature and size of your site, you might want to notify users of a maintenance window since service might be temporarily interrupted in this process.

During this window, you'll complete five tasks:

  1. Take down your site on the old server. You might want to put up a maintenance page to let people know about the scheduled work being performed.
  2. Migrate database changes and / or data changes.
  3. Confirm that your site is working properly on the new server via its IP address.
  4. Change your DNS records to resolve to your new IP address.
  5. Remove the server maintenance page and redirect traffic from that page to the new server.

Once these steps have been completed, the new server will have up-to-the-minute data, and all new traffic receiving the current DNS information will be sent to your new server. Any traffic that still has old DNS information will be sent to the old server and redirected to the new server. This allows all traffic to be delivered to the new server regardless of what DNS information may be cached.

7. Enable / Recreate Automated Site Maintenance Jobs
To complete the migration process, you should enable or recreate any automated site maintenance jobs you may have had running on the old server. At this point, you can change your TTL values back to the default, and if you disabled an SPF record, you may restore it after a few days once you are comfortable that the Internet recognizes your new IP address for your domain.

This migration framework should be considered a very high-level recommendation to facilitate most standard server migrations, so if your architecture is more complex or you have additional configuration requirements, it might not cover everything for your migration. Migrations can be daunting, but if you plan for them and take your time, your site will be up and running on a new server in no time at all. If you have problems in the migration process or have questions about how to best handle your specific migration, make sure to have a professional sysadmin on call ... So just keep SoftLayer's number handy </last SoftLayer plug>.

-Harold

January 19, 2011

AJAX Without XML HTTP Requests

What is AJAX?

Asynchronous JavaScript and XML - AJAX - is what you use to create truly dynamic websites. Ajax is the bridge between application and presentation layers, facilitating lightning fast, instant application of data from the end user to the host and back to the end user. It dynamically changes the data displayed on the page without disrupting the end user or bogging down the client. Although the name is misleading, it is used as a term for any process that can change the content of a web page without unnecessarily reloading other parts of the page.

What are XML HTTP requests?

Passing information from your server to your end user's browser is handled over HTTP in the form of HTML. The browser then takes that info and formats it in a way the end user can view it easily. What if we want to change some of the data in the HTML without loading a whole new HTML document? That's where XML comes in. Your web page needs to tell the browser to ask for the XML from the server; luckily, browsers provide an object called XmlHttpRequest() for exactly that. Once it's invoked, it will poll the server for XML data.
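
For reference, that pattern usually looks something like this on the client. The function name is illustrative, and nothing here runs until you call it from a page:

```javascript
// Classic XmlHttpRequest polling (browser-only):
function pollServer(url, onDone) {
  var req = new XMLHttpRequest();
  req.onreadystatechange = function () {
    // readyState 4 = request complete; status 200 = OK
    if (req.readyState === 4 && req.status === 200) {
      onDone(req.responseText);
    }
  };
  req.open('GET', url, true); // true = asynchronous
  req.send();
}
```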

Why shouldn't you use XML HTTP requests?

A long time ago, in a galaxy far, far away, Microsoft invented the XmlHttpRequest() object for Microsoft Exchange Server 2000. As with all first generation technologies, everyone wanted to use it, and some people implemented it differently. IE didn't even have native support until 2006, and there are still some discrepancies among browsers in how the onreadystatechange event is handled. There is also an issue with cross-domain requests. When the internet was young, JavaScript hackers would steal users' identities by pulling information from secure websites and posting it to their own, stealing bank account numbers, credit cards, etc. Now that the internet has grown up a bit, people with large networks and many servers have found use for sending data across domains, but it's still not possible with XML HTTP requests.

What's an Alternative?

Using JavaScript, you can create client-side scripts whose source is built by server-side scripts, passing variables in the URL. The example this post was built around is a basic web page with local JavaScript, a few checkboxes for human interaction, and a table with some information that we want to change.


Looking at the three JavaScript functions: the first, clearTags(), automatically clears the checkboxes on load; the second, check(box), makes sure that only one box is checked at a time; and the third, createScript(), is the interesting one. It uses the document.createElement() function to create an external JavaScript tag whose source is written in PHP. I have provided a sample script below to explain what I mean. First, we get the variable from the URL using the $_GET superglobal. Then we process the variable with a switch, though you might use this opportunity to grab info from a database or another program. Finally, we print code that the browser will execute as JavaScript.
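A createScript() along these lines would do the job described above. This is only a sketch of the dynamic-script-tag technique; the data.php filename, the foo parameter, and the "dynamic" element id are assumptions for illustration:

```javascript
// Build the URL for the server-side script, passing the chosen
// checkbox value in the query string (parameter name assumed).
function buildScriptUrl(base, value) {
  return base + "?foo=" + encodeURIComponent(value);
}

// Inject an external <script> whose source is the PHP script below;
// the browser fetches it and executes the JavaScript that PHP prints.
function createScript(doc, value) {
  // Remove any previously injected script so tags don't accumulate
  var old = doc.getElementById("dynamic");
  if (old) old.parentNode.removeChild(old);

  var s = doc.createElement("script");
  s.id = "dynamic";
  s.type = "text/javascript";
  s.src = buildScriptUrl("data.php", value);
  doc.getElementsByTagName("head")[0].appendChild(s);
}
```

In the page, check(box) would call createScript(document, theCheckedValue) after enforcing the one-box rule, and the returned JavaScript updates the table in place.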

<code>&lt;?php
// First, get the variable from the URL
$foo = $_GET['foo'];
// Process the variable with a switch
switch ($foo) {
    case 'foo'   : print "var E=document.getElementById('data'); E.innerHTML='bar';";   break;
    case 'fooo'  : print "var E=document.getElementById('data'); E.innerHTML='barr';";  break;
    case 'ffoo'  : print "var E=document.getElementById('data'); E.innerHTML='baar';";  break;
    case 'ffooo' : print "var E=document.getElementById('data'); E.innerHTML='baarr';"; break;
    // Fall back to a safe default for anything unexpected
    default      : print "var E=document.getElementById('data'); E.innerHTML='unknown';";
}
?&gt;
</code>

-Kevin

June 3, 2010

Skinman's Guide to Social Media

1. Can your company benefit from Social Media?
Yes! I think all companies can. From a branding or brand-awareness standpoint, social media outlets can really give you some value, whether it's additional website traffic, company transparency, or actual specials and sales. Let's face it: the more people who see your name on the internet, the better.

2. What is considered Social Media Spam?
You could spam by using these tactics (http://en.wikipedia.org/wiki/Social_networking_spam), but don't. Be personable when sending your messages, and don't overdo it. Sure, you can send a special or an interesting fact a few times, especially if you have customers worldwide; you can always use the time-zone excuse, since most social media posts aren't sticky and are easily overlooked. The key is not using scripts to do your work for you.

3. What are some good tools to help?
I live on Hootsuite (www.hootsuite.com). It allows you to queue up tweets, Facebook status posts, and LinkedIn conversations, and I'm sure more options are on the way. Am I contradicting myself? No, because you still have to type in your updates and then schedule them according to your time-zone needs. Hootsuite also offers great tools for link click-through metrics and savable searches, so you can keep track of what people are saying about you, what your competitors are up to, and what people think of them as well. It has built-in URL shortening and photo uploading, and you can have multiple users with granular security for each. All in all, Hootsuite is a very valuable free tool for corporate social media.

4. If you get some bad feedback what should you do?
Take a deep breath, put on your big-kid pants, layer on some thick skin, and then think about your response and what you might say. Then take another deep breath, re-read your response three or four times, and try to make contact privately if possible. See if there is something you could have done better; sometimes constructive criticism can really help your company. If your attempts to make contact privately fail, you have to decide whether a public response is necessary. Sometimes that can be a good idea, and sometimes it is better to just let it fade; you have to use a little common sense on this one. If there are multiple posters on the same issue, a public response can be a great thing. If it is a single angry poster and the private requests fail, it is probably better to let it go away on its own.

5. To support or not support?
I firmly believe that social media and social support/customer service are two very different things. The Twitter account for SoftLayer is www.twitter.com/softlayer, and there I try to have a little fun, show a little transparency to our fans and customers, and offer a special occasionally, but mainly I try to drive some traffic to our corporate website. I stay far away from customer support and only do light customer service; we have many other traditional ways to get support and service, and our customers should continue to use them. In my book, if a customer has to resort to social media to get attention from our sales or customer service teams, then we have already failed.

6. Have a little fun, have a personality
Now that you have the tools and know what to do and what not to do, have a little fun. Have a scavenger hunt, send out some swag, make a few friends, get some followers, and get to tweeting. Personality can go a long way in getting people interested in what you and your company are up to. Once you get going, it just becomes more and more fun. Look at the bright side: there are much worse jobs you could have in the world.

-Skinman

June 4, 2008

Wait … Back up. I Missed Something!

I’ve been around computers all my life (OK, since 1977 but that’s almost all my life) and was lucky to get my first computer in 1983.

Over the summer of 1984, I was deeply embroiled in (up to that point) the largest programming project of my life, coding Z80 ASM on my trusty CP/M computer when I encountered the most dreaded of all BDOS errors, “BDOS ERROR ON B: BAD SECTOR”

In its most mild form, this cryptic message simply means “copy this data to another disk before this one fails.” However, in this specific instance, it represented the most severe case… “this disk is toast, kaputt, finito, your data is GONE!!!”

Via the School of Hard Knocks, I learned the value of keeping proper backups that day.

If you’ve been in this game for longer than about 10 milliseconds, it’s probable that you’ve experienced data loss in one form or another. Over the years, I’ve seen just about every kind of data loss imaginable, from the 1980s accountant who tacked her data floppy to the filing cabinet with a magnet so she wouldn’t misplace it, all the way to enterprise/mainframe-class SAN equipment that pulverizes terabytes of critical data in less than a heartbeat due to operator error on the part of a contractor.

I’ve consulted with thousands of individuals and companies about their backup implementations and strategies, and am no longer surprised by administrators who believe they have a foolproof backup utilizing a secondary hard disk in their systems. I have witnessed disk controller failures which corrupt the contents of all attached disk drives, operator error and/or forgetfulness that leave gaping holes in so-called backup strategies and other random disasters. On the other side of the coin, I have personally experienced tragic media failure from “traditional backups” utilizing removable media such as tapes and/or CD/DVD/etc.

Your data is your life. I’ve waited until now to mention this because it should be painfully obvious to every administrator, but in my experience the mentality is along the lines of “My data exists, therefore it is safe.” What happens when your data ceases to exist and you become aware of the flaws in your backup plan? I’ll tell you: you go bankrupt, you go out of business, you get sued, you lose your job, you end up homeless, and so on. Sure, maybe those things won’t happen to you, but is your livelihood worth the gamble?

“But Justin… my data is safe because it’s stored on a RAID mirror!” I disagree. Your data is AVAILABLE, your data is FAULT TOLERANT, but it is not SAFE. RAID controllers fail. Disaster happens. Disgruntled or improperly trained personnel type ‘rm -rf /’ or accidentally select the wrong physical device when working with the Disk Manager in Windows. Mistakes happen. The unforeseeable, unavoidable, unthinkable happens.

Safe data is geographically diverse data. Safe data is up-to-date data. Safe data is readily retrievable data. Safe data is more than a single point-in-time instance.

Unsafe data is “all your eggs in one basket.” Unsafe data is “I’ll get around to doing that backup tomorrow.” Unsafe data is “I stored the backups at my house which is also underwater now.” Unsafe data is “I only have yesterday’s backup and last week’s backup, and this data disappeared two days ago.”

SoftLayer’s customers have the option to build a truly safe data backup strategy by employing the EVault option on StorageLayer. This solution provides instantaneous off-site backups; efficiently utilizes tight compression and block-level delta technologies; is fully automated; has an extremely flexible retention policy system permitting multiple tiers of recovery points in time; is always online via our very sophisticated private network for speedy recovery; and, most importantly, is incredibly economical for the value it provides. To really pour on the industry-speak acronym soup, it gives the customer the tools for their BCP to provide a DR scenario with the fastest RTO and the best RPO that any CAB would approve because of its obvious TCR (Total Cost of Recovery). OK, so I made that last one up… but if you don’t recover from data loss, what does it cost you?

On my personal server, I utilize this offering to protect more than 22 GB of data. It backs up my entire server daily, keeping no less than seven daily copies representing at least one week of data. It backs up my databases hourly, keeping no less than 72 hourly copies representing at least three days of data. It does all this seamlessly, in the background, and emails me when it is successful or if there is an issue.

Most importantly, it keeps my data safe in Seattle, while my server is located in Dallas. Alternatively, if my server were located in Seattle, I could choose for my data to be stored in Dallas or our new Washington DC facility. Here’s the kicker, though. It provides me the ability to have this level of protection, with all the bells and whistles mentioned above, without overstepping the boundary of my 10 GB service. That’s right, I have 72 copies of my database and 7 copies of my server, of which the original data totals in excess of 22 GB, stored within 10 GB on the backup server.

That’s more than sufficient for my needs, but I could retain weekly or monthly data without a significant increase in storage requirements, due to the nature of my dataset.

This service costs a mere $20/mo, or $240/yr. How much would you expect to pay to be able to sleep at night, knowing your data is safe?

Are you missing something? Wait … Backup!

-Justin
