Posts Tagged 'Windows'

May 4, 2009

Paradigm Shift

From the beginning of my coming of age in the IT industry, it's been one thing – Windows. As a system administrator in a highly mobile Windows environment, you learn a thing or two to make things tick, and to keep them ticking. I had become quite proficient with Active Directory and could keep a domain going. While Windows is a useful enterprise-grade server solution, it's certainly not the only one. Unfortunately, when I made my departure from that particular environment, I hadn't had much exposure to the plethora of other options available to an administrator.

Then along comes SoftLayer, opening my eyes to an array of new (well, at least to me) operating systems. I began my 'new' IT life with exposure to the latest and greatest: Windows, yes, but also virtualization software such as Xen and Virtuozzo, and great open source operating systems such as CentOS and FreeBSD. With this exposure to all these high-speed technologies, I felt it was time to give the de facto home operating system a break and kick the tires on a new installation.

I can say that while switching to open source was a bit nerve-wracking, it ended up being quick and painless, and I'm not looking back. I've lost a few hours of sleep here and there diving in to learn a thing or two about the new operating system and making tweaks to get it just the way I like it. The process was certainly a learning experience, and I've become much more familiar with an operating system that can seem rather intimidating at first. I went through a few different distributions until I settled on one that's perfect for what I do (like reading the InnerLayer and finishing the multitude of college papers).

The only problem with reloading a PC yourself is that you have to sit there and watch it. It doesn't hurt to have a TV and an MP3 player around while you configure everything and get the reload going, but you still have to be there to make sure everything goes as planned. Imagine this: you click a button and check back in a few. Sound familiar? Yep, it would have been nice to have an automated reload system much like we have here at SoftLayer. Not to mention, if something goes awry, there's the assurance that someone will be there to investigate and correct the issue. That way, I can open a cold one and watch the game, or attend to matters more important than telling my computer my time zone.

May 1, 2009

What A Cluster

When you think about all the things that have to go right, all the time – where "all the time" means millions of times per second – for a user to get your content, it can be a little... daunting. The software, the network and the hardware all have to work for this bit of magic we call the Internet to actually occur.

There are points of failure all over the place. Take a server, for example: hard drives can fail, power supplies can fail, the OS can fail. The people running servers can fail too... maybe you try something new and it has unforeseen consequences. This is simply the way of things.

Mitigation comes in many forms. If your content is mostly images, you could use something like a content delivery network to move your content into the "cloud," so that failure in one area might not take out everything. On the server itself you can add redundant power supplies and RAID arrays. Proper testing and staging of changes can help keep software bugs and configuration errors from impacting your production setup.

Even if nothing fails there will come a time when you have to shut down a service or reboot an entire server. Patches can't always update files that are in use, for example. One way to work around this problem is to have multiple servers working together in a server cluster. Clustering can be done in various ways, using Unix machines, Windows machines and even a combination of operating systems.

Since I've recently set up a Windows 2008 cluster, that's what we're going to discuss. First, some terms. A node is a member of a cluster. Nodes host resources, which are the things a cluster provides. When a node in a cluster fails, another node takes over the job of offering that resource to the network. This is possible because resources (files, IPs, etc.) live on shared storage, typically a set of SAN drives to which multiple machines can connect.

Windows clusters come in a couple of conceptual forms. Active/passive clusters host the resources on one node and keep another node sitting idle, waiting for the first to fail. Active/active clusters, on the other hand, host some resources on each node, putting every node to work. The key with clusters is to size the nodes so that your workloads can still function even if a node fails.
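
To make that sizing rule concrete, here's a minimal back-of-the-envelope sketch in Python. It assumes each node and each workload can be summarized by a single capacity number, which is a big simplification – real capacity planning would look at CPU, memory, disk and network separately – and all the numbers are invented.

    # Toy capacity check: can the surviving nodes absorb every workload
    # if any single node fails? All numbers below are invented.

    def survives_single_failure(node_capacities, workloads):
        """True if total load fits on the remaining nodes after any one failure."""
        total_load = sum(workloads)
        for failed_index in range(len(node_capacities)):
            surviving = sum(c for i, c in enumerate(node_capacities)
                            if i != failed_index)
            if total_load > surviving:
                return False  # losing this node would overload the rest
        return True

    # Two equal nodes in active/active: one node must be able to carry it all.
    print(survives_single_failure([100, 100], [60, 60]))  # False - 120 > 100
    print(survives_single_failure([100, 100], [45, 45]))  # True  -  90 <= 100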

Ok, so you have multiple machines, a SAN between them, some IPs and something you wish to serve up in a highly available manner. How does this work? Once you create the cluster, you go about defining resources. In the case of the cluster I set up, my resource was a file share. I wanted those files to be available on the network even if I had to reboot one of the servers. The resource was actually a combination of an IP address that could be answered by either machine and the iSCSI drive mount that contained the actual files.

Once the resource was established, it was hosted on NodeA. When I rebooted NodeA, though, the resource automatically failed over to NodeB, so the total interruption in service was only a couple of seconds. NodeB took possession of the IP address and the iSCSI mount automatically once it determined that NodeA had gone away.
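
To illustrate the moving parts, here's a toy model of that failover in Python. It's purely illustrative – nothing like how Windows Server 2008 failover clustering is actually implemented – and the node names, IP and iSCSI target below are all made up.

    # Toy active/passive failover: a resource (floating IP + iSCSI mount)
    # moves to a surviving node when its owner goes down. Illustration only.

    class Node:
        def __init__(self, name):
            self.name = name
            self.alive = True

    class Resource:
        """A clustered resource: the IP and iSCSI target travel together."""
        def __init__(self, ip, iscsi_target):
            self.ip = ip
            self.iscsi_target = iscsi_target
            self.owner = None

    def fail_over(resource, nodes):
        """Hand the resource to the first healthy node that isn't the owner."""
        for node in nodes:
            if node.alive and node is not resource.owner:
                resource.owner = node
                print(f"{node.name} took over {resource.ip} "
                      f"and {resource.iscsi_target}")
                return
        raise RuntimeError("no healthy node left to host the resource")

    node_a, node_b = Node("NodeA"), Node("NodeB")
    share = Resource("10.0.0.50", "iqn.2009-05.example:fileshare")
    share.owner = node_a

    node_a.alive = False          # NodeA reboots; its heartbeat disappears
    fail_over(share, [node_a, node_b])
    # -> NodeB took over 10.0.0.50 and iqn.2009-05.example:fileshare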

File serving is a really basic example, but you can cluster much more complicated things like the Microsoft Exchange e-mail server, Internet Information Services, virtual machines, and even network services like DHCP, DNS and WINS.

Clusters do not put an end to service failures. The shared storage can fail, the network can fail, the software configuration or the humans can fail. With a proper technical staff implementing and maintaining them, however, clusters can be a useful tool in the quest for high availability.

October 24, 2008

Pushing the Microsoft Kool-Aid

Recently, on one of our technical forums, I contributed to a discussion about the Windows operating system. One of our directors saw the post and thought it might be of interest to readers of the InnerLayer as well. The post focused on the pros and cons of Windows 2008 from the viewpoint of a systems/driver engineer (aka me). If you have no technical background, or no interest in Microsoft operating system offerings, what follows probably will not be of interest to you. Just the same, here is my two cents.

Microsoft is no different than any other developer when it comes to writing software – they get better with each iteration. There is not a person out there who would argue that the world of home computers would have been better off if none of us ever progressed beyond MS-DOS 1.0. Not that there is anything wrong with MS-DOS. I love it, and I still use it occasionally for embedded work. But my point is that while there have certainly been some false starts along the way (can you say Microsoft Bob?), Microsoft's operating systems generally get better with each release.

So why not go out and update everything the day the latest and greatest OS hits the shelves? Because, as most of you know, there are bugs that have to get worked out. To add to that, the more complex the OS gets, the more bugs there are and the more time it takes to shake them out. Windows Server 2008 is no different. In my experience there are still a number of troublesome issues with W2K8 that need to be addressed. Just to name a few:

  • UAC (User Account Control) - these are the security features that give us so many headaches. I'm not saying we don't need the added security. I'm just saying this is a new arena for MS and they still have a lot to learn. After clicking YES, I REALLY REALLY REALLY WANT TO INSTALL SAID APPLICATION for the 40th time in a day, most administrators will opt to disable UAC, thwarting the added security benefits entirely. (There's a small elevation-detection sketch after this list.) If I were running this team at MS I'd require all my developers to take a good hard look at Linux.
  • UMD (user-mode drivers) - running a device driver, or a portion of one, in restricted (and therefore safe) user-mode memory rather than in the kernel is a great idea in terms of improving OS reliability. I've seen numbers suggesting that as many as 90% of hard OS failures are caused by faulty third-party drivers mucking around in kernel mode. However, implementing user-mode drivers adds new complexity for hardware manufacturers that don't want to take a performance hit, and from my experience not all hardware vendors are up to speed yet.
  • Driver verification - this, to me, is the most troublesome and annoying issue right now with the 64-bit version of W2K8. Only kernel-mode software that has been certified in the MS lab is allowed to execute on a production boot of the OS. Period. Since I am writing this on the SoftLayer blog, I am assuming most of you are not selecting hardware and drivers to run on your boxes; we are handling that for you. But let me tell you, it's a pain in the butt to only run third-party drivers that have been through the MS quality lab. Besides not being able to run drivers we have developed in house, it is impossible for us to apply a patch from even the largest of hardware vendors without waiting on that patch to get submitted to MS and then cleared for the OS. A good example was a problem we ran into with an Intel Ethernet driver. Here at SoftLayer we found a bug in the driver, and after a lot of back and forth with Intel's engineers we had a fix in hand. But that fix could not be applied to the W2K8 64-bit boxes until weeks later, when the fix finally made it from Intel to MS and back to Intel and us again. Very frustrating.
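
As promised in the UAC bullet, here's a minimal sketch of the "detect elevation up front" idea in Python. IsUserAnAdmin is a real (if deprecated) shell32 call; everything else here is just illustrative scaffolding.

    # Windows-only sketch: bail out early if we aren't elevated, rather than
    # letting an install die (or pile up UAC prompts) halfway through.
    import ctypes
    import sys

    def is_elevated():
        """True if the current process has administrator rights (Windows)."""
        try:
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except AttributeError:
            return False  # not on Windows, so no UAC to negotiate

    if __name__ == "__main__":
        if not is_elevated():
            sys.exit("Needs elevation - right-click and 'Run as administrator'.")
        print("Elevated; proceeding.")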

Okay, so now that you see some of the reasons NOT to use MS Windows Server 2008, what are some of the reasons it's at least worth taking a look at? Well, here are just a few that I know of from the work I have done keeping up to speed with the latest driver model.

  • Improved memory management - W2K8 issues fewer, larger disk I/Os than its 2003 predecessor. This applies to standard disk fetching, but also to paging and even read-aheads. On Windows 2003 it is not uncommon for disk writes to happen in small fixed-size blocks even when one larger transfer would do; W2K8 batches these into fewer, larger requests.
  • Improved data reliability - everyone knows how painful disk corruption can be, and everyone knows taking a server offline on a regular basis to run chkdsk and repair disk corruption is slow. One of the nicest improvements in terms of administering a webserver is that W2K8 employs a technology called NTFS self-healing. This new feature built into the file system detects disk corruption on the fly and quarantines the affected sectors, allowing system worker threads to execute chkdsk-like repairs on the corrupted area without taking the rest of the volume offline.
  • Scalability - the W2K8 kernel introduces a number of streamlining changes that enhance system-wide performance. A minor but significant change to the operating system's low-level timer code, combined with new I/O completion handling and a more efficient thread pool, offers marked improvement for load-heavy server applications. I have read documentation claiming that the reduction in CPU synchronization alone directly yields a 30% gain in the number of concurrent users Windows 2008 can support over 2003. That's not to say that once you throw in all the added security and take the user-mode driver hit you won't be looking at 2003 speeds; I'm just pointing out hard kernel-level improvements that can be directly quantified by multiplying your resources against the number of saved CPU cycles. (A toy calculation follows this list.)
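
For what it's worth, here is the kind of toy arithmetic that last bullet is gesturing at. Every number below is invented purely to show the shape of the trade-off between the kernel gains and the user-mode driver hit.

    # Invented numbers only: the claimed kernel gain vs. a hypothetical
    # user-mode-driver overhead, to show how the two effects can offset.
    users_2003 = 1000        # assumed concurrent users on a Windows 2003 box
    kernel_gain = 0.30       # the 30% synchronization improvement cited above
    umd_overhead = 0.20      # hypothetical hit from user-mode drivers

    users_2008 = users_2003 * (1 + kernel_gain) * (1 - umd_overhead)
    print(f"~{users_2008:.0f} concurrent users")  # ~1040: close to 2003 levels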

Alright, no need to beat a dead horse. My hope was, if nothing else, to muddy the waters a bit. The majority of posts I read on our internal forums seemed to recommend avoiding W2K8 like the plague. I'm only suggesting that while it is certainly not perfect, there are some benefits to at least taking it for a test drive. Besides, with SoftLayer's handy-dandy portal-driven OS deployment, in the amount of time it took you to read all my rambling you might have already installed Windows Server 2008 and tried it out for yourself. Okay, maybe that's a bit of an exaggeration. But still... you get the idea!

-William

July 24, 2008

Here's to Bill

Bill Gates' final day as an employee of Microsoft was June 27, 2008. Let's all raise our virtual glasses in a toast! Or maybe a virtual fist-bump is better - here you go: III!

I had intended to type this up in time for Mr. Gates' last day, but simply didn't have time. This marks a historic change at the software behemoth in Washington. Love him or hate him (and there are many people on each side), few people truly realize the impact he has had on the world as we know it.

I love the fact that in America, you can get a crazy, creative idea and run with it. Gates realized that Intel's 8080 chip, released in April 1974, was the first affordable chip that could run BASIC in a computer small enough to be classified as a "personal" computer. Then he read an article in the January '75 issue of Popular Electronics about a microcomputer called the Altair 8800, made by Micro Instrumentation and Telemetry Systems (MITS), which ran on an Intel 8080. Realizing that he had to seize the moment because the timing would never be right again, Gates took a leave of absence from Harvard and contacted MITS about developing a BASIC interpreter for the machine. He collaborated with Paul Allen to prepare demo software and close the deal, and the two formed a company named "Micro-soft." The hyphen was dropped in 1976.

Can we imagine what our world would be like had Gates missed reading that magazine in January ‘75? Or if he had decided to finish school and become a lawyer as his parents had hoped? I can't imagine what technology I'd be using to produce documents like this today if Gates and Allen didn't follow through on their crazy idea in 1975.

To get an idea of how deeply Bill Gates has influenced us today, just try running a business or doing your job without interacting with a computer. If it's not impossible, it's very, very difficult at best. Next, try running the computers for your business without ANY Microsoft products. Again, this is difficult but not totally impossible. Then, try interacting with other businesses that use Microsoft products. If you're successful doing all that, think of how many of your daily activities still involve a Microsoft product.

I actually worked for a boss in the mid-'90s who hated Microsoft. He ran IBM's OS/2 operating system and non-Microsoft applications (WordPerfect, Quattro Pro spreadsheets, etc.). He didn't want to be reminded that Gates originally helped develop OS/2 in partnership with IBM. When IBM dropped support for OS/2, my boss capitulated and migrated to Windows.

At SoftLayer, we use and support a lot of non-Microsoft products. But we couldn't do what we do today without Microsoft products, and many of our customers demand them.

In typical American entrepreneurial fashion, SoftLayer started with some semi-crazy ideas to connect the dots between different products in creative ways that had not been done before. We will do well to have a fraction of the impact that Bill Gates has made.

-Gary
