Author Archive: Justin Scott

November 26, 2010

The End is Near!

On February 4th, 2009, I told you that IPv4 address space is running out and that IPv6 is here to replace it.

As of this writing, there are about 218 days’ worth of IPv4 address space remaining, and the usage rate is still accelerating. Before you know it, there won’t be any more new IP space to allocate, and between now and then you will see much stricter rules applied to handing out addresses.

Of course, these rules are not imposed by SoftLayer, but by IANA, the Internet Assigned Numbers Authority. IANA controls the IPv4 address space that is doled out to the regional registries, and it already imposes some pretty hefty regulations on how and when IPv4 space is handed out; in some cases the regional registries make addresses even harder to obtain.

This is by design, and (despite frustrations otherwise) it is a good thing. If IANA had not put regulations in place, and tightened them as we went along, we would have run out of addresses quite a long time ago, well before IPv6 was ready. Just to put it into perspective: there are already more internet-connected devices in the world than there are IPv4 addresses, with an estimated 22 billion devices by 2020.

As I mentioned 21 months ago, SoftLayer has native IPv6 support on all networks in all datacenters. We also give you IPv6 address space in large chunks, and free of charge.

Since then, my home ISP has started providing native IPv6 support across the wire, and my home PCs all have IPv6 addresses on their interfaces. When a website or network service publishes an “AAAA” record in DNS, my systems at home prefer the IPv6 path over the IPv4 path. My personal servers share an IPv6 /64 subnet.

While the address space is waning, IPv4 isn’t going to die because of it. Not yet, at least. As more people adopt IPv6, it will tend to free up IPv4 address space for those of us who still enjoy playing old games or using old software that cannot or will not ever be updated for the new protocol.

Before the addresses run out, and before new sites come online that are forced to use IPv6 with no native IPv4 access, check whether the ISP for your home or business already provides native IPv6 capability. If they don’t, pick up a phone and ask why. If they don’t know, choose a new provider.

SoftLayer already has you covered. And we have a countdown timer on the home page at www.softlayer.com to keep you up to date.

The end is near!!! (For IPv4 at least)

-Justin

February 13, 2009

1234567890

Do you remember that song from Sesame Street? The lyrics were so catchy that very few people who grew up watching it have forgotten the song. You don’t even need to look up the lyrics; everyone knows them, even if they’ve never watched the show.

1,2,3,4,5… 6,7,8,9,10… 11,12!

If you’re not aware, all UNIX and UNIX-like operating systems keep their time in a format known as “epoch time”. This is the number of seconds since January 1, 1970 00:00:00 GMT. Regardless of your timezone, your UNIX machine should show the same number of seconds as every other UNIX machine in the world. The clock itself is based on GMT; your local timezone settings simply interpret that epoch time for your local timezone.
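
If you’ve never looked at the raw counter, here’s a minimal sketch in Python (the shell equivalent is “date +%s”) that prints the current value:

    import time
    print(int(time.time()))  # seconds elapsed since January 1, 1970 00:00:00 GMT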

So what’s that have to do with the price of beans?

Well, today is an interesting day for the epoch timestamp. On Friday, February 13, 2009 at 23:31:30 GMT, the epoch timestamp will read 1234567890.
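
You can check that date for yourself; here’s a small Python sketch that converts the timestamp back into a human-readable GMT date:

    from datetime import datetime
    print(datetime.utcfromtimestamp(1234567890))  # prints 2009-02-13 23:31:30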

So how can you be sure that your UNIX (or Windows) machine has accurate time? Well, if you have a SoftLayer server, you can simply point your NTP client to “servertime.service.softlayer.com”. This traffic passes over the back-end private network, which has unlimited bandwidth, so you won’t consume your precious public-facing bandwidth to keep your server’s time accurate to within milliseconds. Just like every other NTP server on the internet, ours sync up constantly throughout the day with various atomic clocks around the world. You can’t get much more accurate than that, at least without having your own little chunk of cesium inside your computer. Incidentally, this is the same technology that makes your GPS system work: a constellation of satellites overhead that are basically nothing more than atomic clocks with transmitters constantly broadcasting the current time.
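
For reference, pointing a typical ntpd installation at that server is a one-line addition to its configuration; the snippet below is only illustrative, and the exact file location and package names vary by operating system:

    # /etc/ntp.conf -- add SoftLayer's time server (reachable over the private network)
    server servertime.service.softlayer.com iburst

    # or perform a one-time sync from the command line:
    #   ntpdate servertime.service.softlayer.com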

It’s just another one of those cool things that we do for our customers to help them get the most out of their server without having all the bare essentials stack up against their monthly bandwidth allocation.

Neat, huh?

February 4, 2009

Brought to You by the Number “6”

Most of us may not realize it, but over a decade ago the Postal Service determined it could no longer assign a unique address to every home and business. You may not even have noticed that they began revoking unique addresses from individual postal customers, replacing your address with a shared one that changes periodically and limits your ability to interact with postal customers all over the world.

Today, unbeknownst to you, when you send a package to your favorite receiver, they no longer receive it at their unique delivery location. It is first sent to a location that is shared by them and dozens (even hundreds) of nearby businesses, where someone reads the recipient’s name and delivers the package to the right location. In fact, because of a similar process in your neighborhood, that shipper couldn’t send you a package until after you sent one to them first. Even though their package has your name on it, the postal service just throws it in the trash because it has no record of you ever sending them something first.

OK, enough of this fuzzy, convoluted metaphor… I’m not talking about the postal service, but rather the Internet.

Today there is a high probability that when you request a website from your browser, you are actually sending the request to a shared IP on a server that hosts several websites. The server must then work out which site your request was for and behave accordingly. Likewise, on your end of the connection, you are probably using a Network Address Translation (NAT) gateway, which lets multiple computers on your network share one IP address on the Internet. This gateway won’t let anyone contact you unless you’ve contacted them first. On top of that, your IP address probably changes every few hours or days, which makes it difficult to contact your computer remotely, even if you’ve set everything up to accept certain types of unsolicited connections.

Today’s Internet, as we know it, has 4,294,967,296 (2^32) IP addresses. This is called IPv4. That is not enough to give every PDA phone, every home and office computer, every website, and every network device a unique, permanent, unchanging address. Imagine, if you will, that your mobile phone number changed every few hours and that you could not receive a call without making one first.

As early as 1993, the engineers responsible for all the “magic arrows” under the hood of the Internet began discussing and constructing a plan to save us from running out of internet addresses. They wanted to get this in place, of course, before we started putting IPs on everything such as our televisions, DVRs, refrigerators, toasters, cars, phones, etc. On January 21st, SoftLayer made an important announcement: we are now delivering to our customers the result of more than a decade of engineering work. Welcome to the “New Internet”, IPv6.

Why is IPv6 so much better? At the risk of sounding like I’m making a gross overstatement, we will never have to worry about IP address space again. Recall I told you that the Internet as we know it today has “only” 4,294,967,296 unique addresses. IPv6 has 340,282,366,920,938,463,463,374,607,431,768,211,456 (2^128) unique addresses. If you want to sound smart and confuse your colleagues, you can tell them that there are more than 340 undecillion IP addresses in IPv6. That’s just a tiny bit more than 4 billion.
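
If you want to double-check those figures, any Python prompt will happily do the math on arbitrarily large integers:

    >>> 2 ** 32
    4294967296
    >>> 2 ** 128
    340282366920938463463374607431768211456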

It’s been said there are enough bits in IPv6 that we could assign a unique IP address to every atom covering the surface of the earth, and still have enough left over to address every surface atom of 100+ more earths.

The default IP allocation for IPv6 users is a “/64” subnet. There are 18,446,744,073,709,551,616 IPs in a subnet this size. Yes, it’s a larger number… but there’s more to it than that. That number works out to as many complete IPv4 Internets as there are unique IPs in the IPv4 specification. That’s right: 18,446,744,073,709,551,616 / 4,294,967,296 = 4,294,967,296. Your default allocation is equivalent to 4 billion times the entire IP space of today’s IPv4 Internet.
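
Here’s that subnet arithmetic as a quick Python sanity check:

    ipv4_internet = 2 ** 32   # every address in today's IPv4 Internet
    ipv6_slash64 = 2 ** 64    # every address in a single default /64 allocation
    print(ipv6_slash64 // ipv4_internet)  # 4294967296 -- one /64 holds about 4 billion IPv4 Internets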

Some readers may think, “That’s fine, but there have been IPv6 addresses in use for years, what makes SoftLayer’s offering so remarkable?” Well I’m glad you asked. Unlike traditional IPv6 allocations, which tunnel the IPv6 protocol over IPv4 to a location that can actually use IPv6, SoftLayer provides native IPv6 support to the Internet. There is no middle man. Your IPv6 traffic passes to the end user over the same superior network as any IPv4 packet in our datacenters.

Looks like a challenge to me. Who can be first to host 18 quintillion websites on their server?

June 4, 2008

Wait … Back up. I Missed Something!

I’ve been around computers all my life (OK, since 1977 but that’s almost all my life) and was lucky to get my first computer in 1983.

Over the summer of 1984, I was deeply embroiled in (up to that point) the largest programming project of my life, coding Z80 ASM on my trusty CP/M computer, when I encountered the most dreaded of all BDOS errors: “BDOS ERROR ON B: BAD SECTOR”.

In its most mild form, this cryptic message simply means “copy this data to another disk before this one fails.” However, in this specific instance, it represented the most severe case… “this disk is toast, kaputt, finito, your data is GONE!!!”

Via the School of Hard Knocks, I learned the value of keeping proper backups that day.

If you’ve been in this game for longer than about 10 milliseconds, it’s probable that you’ve experienced data loss in one form or another. Over the years, I’ve seen just about every kind of data loss imaginable, from the 1980s accountant who tacked her data floppy to the filing cabinet with a magnet so she wouldn’t misplace it, all the way to enterprise/mainframe-class SAN equipment that pulverizes terabytes of critical data in less than a heartbeat due to operator error on the part of a contractor.

I’ve consulted with thousands of individuals and companies about their backup implementations and strategies, and am no longer surprised by administrators who believe they have a foolproof backup utilizing a secondary hard disk in their systems. I have witnessed disk controller failures which corrupt the contents of all attached disk drives, operator error and/or forgetfulness that leave gaping holes in so-called backup strategies and other random disasters. On the other side of the coin, I have personally experienced tragic media failure from “traditional backups” utilizing removable media such as tapes and/or CD/DVD/etc.

Your data is your life. I’ve waited up until this point to mention this, because it should be painfully obvious to every administrator, but in my experience the mentality is along the lines of “My data exists, therefore it is safe.” What happens when your data ceases to exist, and you become aware of the flaws in your backup plan? I’ll tell you – you go bankrupt, you go out of business, you get sued, you lose your job, you go homeless, and so-on. Sure, maybe those things won’t happen to you, but is your livelihood worth the gamble?

“But Justin… my data is safe because it’s stored on a RAID mirror!” I disagree. Your data is AVAILABLE, your data is FAULT TOLERANT, but it is not SAFE. RAID controllers fail. Disaster happens. Disgruntled or improperly trained personnel type ‘rm -rf /’ or accidentally select the wrong physical device when working with the Disk Manager in Windows. Mistakes happen. The unforeseeable, unavoidable, unthinkable happens.

Safe data is geographically diverse data. Safe data is up-to-date data. Safe data is readily retrievable data. Safe data is more than a single point-in-time instance.

Unsafe data is “all your eggs in one basket.” Unsafe data is “I’ll get around to doing that backup tomorrow.” Unsafe data is “I stored the backups at my house which is also underwater now.” Unsafe data is “I only have yesterday’s backup and last week’s backup, and this data disappeared two days ago.”

SoftLayer’s customers are privileged to have the option to build a truly safe data backup strategy by employing the Evault option on StorageLayer. This solution provides instantaneous off-site backups; efficiently utilizes tight compression and block-level delta technologies; is fully automated; has an extremely flexible retention policy system permitting multiple tiers of recovery points in time; is always online via our very sophisticated private network for speedy recovery; and, most importantly, is incredibly economical for the value it provides. To really pour on the industry-speak acronym soup, it gives the customer the tools for their BCP to provide a DR scenario with the fastest RTO and the best RPO that any CAB would approve because of its obvious TCR (Total Cost of Recovery). OK, so I made that last one up… but if you don’t recover from data loss, what does it cost you?

On my personal server, I utilize this offering to protect more than 22 GB of data. It backs up my entire server daily, keeping no less than seven daily copies representing at least one week of data. It backs up my databases hourly, keeping no less than 72 hourly copies representing at least three days of data. It does all this seamlessly, in the background, and emails me when it is successful or if there is an issue.

Most importantly, it keeps my data safe in Seattle, while my server is located in Dallas. Alternatively, if my server were located in Seattle, I could choose for my data to be stored in Dallas or our new Washington DC facility. Here’s the kicker, though. It provides me the ability to have this level of protection, with all the bells and whistles mentioned above, without overstepping the boundary of my 10 GB service. That’s right, I have 72 copies of my database and 7 copies of my server, of which the original data totals in excess of 22 GB, stored within 10 GB on the backup server.

That’s more than sufficient for my needs, but I could retain weekly or monthly data without a significant increase in storage requirements, due to the nature of my dataset.

This service costs a mere $20/mo, or $240/yr. How much would you expect to pay to be able to sleep at night, knowing your data is safe?

Are you missing something? Wait … Backup!

-Justin
