Posts Tagged 'Power'

October 8, 2014

An Insider’s Look at Our Data Centers

I’ve been with SoftLayer for over four years now. It’s been a journey that has taken me around the world—from Dallas to Singapore to Washington, D.C., and back again. Along the way, I’ve met amazingly brilliant people who have helped me sharpen the tools in my ‘data center toolbox,’ allowing me to enhance the customer experience in a complex compute environment.

I like to think of our data centers as masterpieces of elegant design. We currently have 14 of these works of art, with many more on the way. Here’s an insider’s look at the design:

Keeping It Cool
Our POD layouts have a raised floor system. Air conditioning units push chilled air up through the raised floor at the front of the servers in the ‘cold rows’; that air passes through the servers and exhausts into the ‘warm rows.’ The warm rows have ceiling vents to rapidly clear the warm air from the backs of the servers.

Jackets are recommended for this arctic environment.

Pumping up the POWER
Nothing is as important to us as keeping the lights on. Every data center has a three-tiered approach to keeping your servers and services running. The first tier is street power. Each rack has two power strips to distribute the load and offer true redundancy for redundant servers and switches, with the remote ability to power down an individual port on either power strip.

The second tier is our battery backup for each POD. It provides seamless failover the moment street power is lost.

This leads to the third tier in our model: generators. We have generators in place to sustain continuity of power until street power returns. Check out the 2-megawatt diesel generator installation at the DAL05 data center here.

The Ultimate Social Network
Neither power nor cooling matter if you can’t connect to your server, which is where our proprietary network topology comes into play. Each bare metal server and each virtual server resides in a rack that connects to three switches. Each of those switches connects to an aggregate switch for a row. The aggregate switch connects to a router.

The first switch, our private backend network, allows for SSL and VPN connectivity to manage your server. It also gives you the ability to have server-to-server communication without incurring bandwidth overages.

The second switch, our public network, provides public Internet access to your device, which is perfect for shopping, gaming, coding, or whatever you want to use it for. With 20TB of bandwidth coming standard for this network, the possibilities are endless.

The third and final switch, management, allows you to connect to the Intelligent Platform Management Interface, which provides tools such as KVM, hardware monitoring, and even virtual CDs to install an image of your choosing! The cables to your devices from the switches are color-coded, labeled port-number-to-rack-unit, and masterfully arranged to maximize identification and airflow.
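For the programmatically inclined, the rack-to-switch relationship described above can be sketched as a toy model. Everything here (the class, the switch-naming scheme) is made up for illustration and is not SoftLayer's actual tooling:

```python
# Toy model of the three-network rack topology described above.
# Names and naming conventions are illustrative only.

NETWORKS = ("private", "public", "management")

class Server:
    def __init__(self, rack, unit):
        self.rack = rack
        self.unit = unit
        # one uplink per network, each to a different top-of-rack switch
        self.uplinks = {net: f"{rack}-{net}-sw" for net in NETWORKS}

    def switch_for(self, network):
        return self.uplinks[network]

srv = Server(rack="dal05-r12", unit=7)
print(srv.switch_for("management"))  # dal05-r12-management-sw
```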

A Soft Place for Hardware
The heart and soul of our business is the computing hardware. We use enterprise-grade hardware from the ground up, ranging from our smallest offering, a 1-core, 1GB RAM, 25GB HDD virtual server, to one of our largest: a quad 10-core, 512GB RAM bare metal server with multiple 4TB HDDs. With excellent hardware comes excellent options. There is almost always a path to improvement: unless you already have the top of the line, you can always add more, whether it be additional drives, RAM, or even processors.

I hope you enjoyed the view from the inside. If you want to see the data centers up close and personal, I’m sorry to say they’re closed to the public. But you can take a virtual tour of some of our data centers via YouTube: AMS01 and DAL05.

-Joshua Fox

February 15, 2013

Cedexis: SoftLayer "Master Model Builder"

Think of the many components of our cloud infrastructure as analogous to LEGO bricks. If our overarching vision is to help customers "Build the Future," then our products are "building blocks" that can be purposed and repurposed to create scalable, high-performance architecture. Like LEGO bricks, each of our components is compatible with every other component in our catalog, so our customers are essentially showing off their Master Model Builder skills as they incorporate unique combinations of infrastructure and API functionality into their own product offerings. Cedexis has proven to be one of those SoftLayer "Master Model Builders."

As you might remember from their Technology Partner Marketplace feature, Cedexis offers a content and application delivery system that helps users balance traffic based on availability, performance and cost. They've recently posted a blog about how they integrated the SoftLayer API into their system to detect an unresponsive server (disabled network interface), divert traffic at the DNS routing level and return it as soon as the server became available again (re-enabled the network interface) ... all through the automation of their Openmix service:

They've taken the building blocks of SoftLayer infrastructure and API connectivity to create a feature-rich platform that improves the uptime and performance for sites and applications using Openmix. Beyond the traffic shaping around unreachable servers, Cedexis also incorporated the ability to move traffic between servers based on the amount of bandwidth you have remaining in a given month or based on the response times it sees between servers in different data centers. You can even make load balancing decisions based on SoftLayer's server management data with Fusion — one of their newest products.
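In broad strokes, the health-check-and-divert loop described above looks something like this sketch. It's purely illustrative: the `check_health` callable and the DNS pool are hypothetical stand-ins, not Cedexis's actual Openmix code or the SoftLayer API.

```python
# Illustrative failover sketch: probe each server, pull unresponsive
# ones out of the DNS pool, and return them once they respond again.

def rebalance(servers, check_health, dns_pool):
    """check_health(server) -> bool; dns_pool is a mutable set of active servers."""
    for server in servers:
        if check_health(server):
            dns_pool.add(server)       # server reachable: route traffic to it
        else:
            dns_pool.discard(server)   # server unreachable: divert traffic away

servers = ["dal05-web1", "sea01-web1"]
pool = {"dal05-web1", "sea01-web1"}
rebalance(servers, lambda s: s != "sea01-web1", pool)
print(pool)  # {'dal05-web1'}
```

Run the same loop again once the interface is re-enabled and the server rejoins the pool automatically.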

The tools and access Cedexis uses to power these Openmix features are available to all of our customers via the SoftLayer API, and if you've ever wondered how to combine our blocks into your environment in unique, dynamic and useful ways, Cedexis gives a perfect example. In the Product Development group, we love to see these kinds of implementations, so if you're using SoftLayer in an innovative way, don't keep it a secret!

-Bryce

February 8, 2013

Data Center Power-Up: Installing a 2-Megawatt Generator

When I was a kid, my living room often served as a "job site" where I managed a fleet of construction vehicles. Scaled-down versions of cranes, dump trucks, bulldozers and tractor-trailers littered the floor, and I oversaw the construction (and subsequent destruction) of some pretty monumental projects. Fast-forward a few years (or decades), and not much has changed except that the "heavy machinery" has gotten a lot heavier, and I'm a lot less inclined to "destruct." As SoftLayer's vice president of facilities, part of my job is to coordinate the early logistics of our data center expansions, and as it turns out, that responsibility often involves overseeing some of the big rigs that my parents tripped over in my youth.

The video below documents the installation of a new Cummins two-megawatt diesel generator for a pod in our DAL05 data center. You see the crane prepare for the work by installing counter-balance weights, and work starts with the team placing a utility transformer on its pad outside our generator yard. A truck pulls up with the generator base in tow, and you watch the base get positioned and lowered into place. The base looks so large because it also serves as the generator's 4,000 gallon "belly" fuel tank. After the base is installed, the generator is trucked in, and it is delicately picked up, moved, lined up and lowered onto its base. The last step you see is the generator housing being installed over the generator to protect it from the elements. At this point, the actual "installation" is far from over — we need to hook everything up and test it — but those steps don't involve the nostalgia-inducing heavy machinery you probably came to this post to see:

When we talk about the "megawatt" capacity of a generator, we're talking about the amount of power available for use when the generator is operating at full capacity. One megawatt is one million watts, so a two-megawatt generator could power 20,000 100-watt light bulbs at the same time. This power can be sustained for as long as the generator has fuel, and we have service level agreements to keep us at the front of the line to get more fuel when we need it. Here are a few other interesting use cases that could be powered by a two-megawatt generator:

  • 1,000 Average Homes During Mild Weather
  • 400 Homes During Extreme Weather
  • 20 Fast Food Restaurants
  • 3 Large Retail Stores
  • 2.5 Grocery Stores
  • A SoftLayer Data Center Pod Full of Servers (Most Important Example!)
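As a quick sanity check on the light-bulb arithmetic above:

```python
# 2 MW at 100 W per bulb
generator_watts = 2_000_000
bulb_watts = 100

bulbs = generator_watts // bulb_watts
print(bulbs)  # 20000
```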

Every SoftLayer facility has an n+1 power architecture. If we need three generators to provide power for three data center pods in one location, we'll install four. This additional capacity allows us to balance the load on generators when they're in use, and we can take individual generators offline for maintenance without jeopardizing our ability to support the power load for all of the facility's data center pods.
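That sizing rule can be written down in a couple of lines. The numbers below are illustrative, not actual facility figures:

```python
import math

def generators_to_install(total_load_mw, generator_capacity_mw):
    """n+1: enough generators to carry the load, plus one spare."""
    n = math.ceil(total_load_mw / generator_capacity_mw)
    return n + 1

# e.g. three 2 MW pods backed by 2 MW generators -> need 3, install 4
print(generators_to_install(6, 2))  # 4
```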

Those of you who fondly remember Tonka trucks and CAT crane toys are the true target audience for this post, but even if you weren't big into construction toys when you were growing up, you'll probably still appreciate the work we put into safeguarding our facilities from a power perspective. You don't often see the "outside the data center" work that goes into putting a new SoftLayer data center pod online, so I thought I'd give you a glimpse. Are there any topics from an operations or facilities perspective that you'd also like to see?

-Robert

July 27, 2012

SoftLayer 'Cribs' ≡ DAL05 Data Center Tour

The highlight of any customer visit to a SoftLayer office is always the data center tour. The infrastructure in our data centers is the hardware platform on which many of our customers build and run their entire businesses, so it's not surprising that they'd want a first-hand look at what's happening inside the DC. Without exception, visitors are impressed when they walk out of a SoftLayer data center pod ... even if they've been in dozens of similar facilities in the past.

What about the customers who aren't able to visit us, though? We can post pictures, share stats, describe our architecture and show you diagrams of our facilities, but those mediums can't replace the experience of an actual data center tour. In the interest of bridging the "data center tour" gap for customers who might not be able to visit SoftLayer in person (or who want to show off their infrastructure), we decided to record a video data center tour.

If you've seen "professional" video data center tours in the past, you're probably positioning a pillow on top of your keyboard right now to protect your face if you fall asleep from boredom when you hear another baritone narrator voiceover and see CAD mock-ups of another "enterprise class" facility. Don't worry ... That's not how we roll:

Josh Daley — whose role as site manager of DAL05 made him the ideal tour guide — did a fantastic job, and I'm looking forward to feedback from our customers about whether this data center tour style is helpful and/or entertaining.

If you want to see more videos like this one, "Like" it, leave comments with ideas and questions, and share it wherever you share things (Facebook, Twitter, your refrigerator, etc.).

-@khazard

April 17, 2012

High Performance Computing for Everyone

This guest blog was submitted by Sumit Gupta, senior director of NVIDIA's Tesla High Performance Computing business.

The demand for greater levels of computational performance remains insatiable in the high performance computing (HPC) and technical computing industries, as researchers, geophysicists, biochemists, and financial quants continue to seek out and solve the world's most challenging computational problems.

However, access to high-powered HPC systems has been a constant problem. Researchers must compete for supercomputing time at popular open labs like Oak Ridge National Laboratory in Tennessee. And small and medium-size businesses, even large companies, cannot afford to constantly build out larger computing infrastructures for their engineers.

Imagine the new discoveries that could happen if every researcher had access to an HPC system. Imagine how dramatically the quality and durability of products would improve if every engineer could simulate product designs 20, 50 or 100 more times.

This is where NVIDIA and SoftLayer come in. Together, we are bringing accessible and affordable HPC computing to a much broader universe of researchers, engineers and software developers from around the world.

GPUs: Accelerating Research

High-performance NVIDIA Tesla GPUs (graphics processing units) are quickly becoming the go-to solution for HPC users because of their ability to accelerate all types of commercial and scientific applications.

From Beijing to Silicon Valley — and just about everywhere in between — GPUs are enabling breakthroughs and discoveries in biology, chemistry, genomics, geophysics, data analytics, finance, and many other fields. They are also driving computationally intensive applications, like data mining and numerical analysis, to much higher levels of performance — as much as 100x faster.

The GPU's "secret sauce" is its unique ability to provide power-efficient HPC performance while working in conjunction with a system's CPU. With this "hybrid architecture" approach, each processor is free to do what it does best: GPUs accelerate the parallel research application work, while CPUs process the sequential work.

The result is an often dramatic increase in application performance.
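The division of labor can be illustrated with a toy sketch: a data-parallel "map" standing in for the GPU's work, and a sequential "reduce" for the CPU's. This is an analogy only, not actual GPU code:

```python
# Analogy: the "accelerator" handles the data-parallel step,
# the "host" handles the sequential step.

def device_map(xs):
    # data-parallel: each element is independent (GPU-friendly)
    return [x * x for x in xs]

def host_reduce(ys):
    # sequential: each iteration depends on the previous (CPU-friendly)
    total = 0
    for y in ys:
        total += y
    return total

print(host_reduce(device_map([1, 2, 3, 4])))  # 30
```

In a real hybrid application the map step would run as a CUDA kernel across thousands of GPU threads while the CPU orchestrates the surrounding logic.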

SoftLayer: Affordable, On-demand HPC for the Masses

Now, we're coupling GPUs with easy, real-time access to computing resources that don't break the bank. SoftLayer has created exactly that with a new GPU-accelerated hosted HPC solution. The service uses the same technology that powers some of the world's fastest HPC systems, including dual-processor Intel E5-2600 (Sandy Bridge) based servers with one or two NVIDIA Tesla M2090 GPUs:

NVIDIA Tesla

SoftLayer also offers an on-demand, consumption-based billing model that allows users to access HPC resources when and how they need to. And, because SoftLayer is managing the systems, users can keep their own IT costs in check.

You can get more system details and pricing information here: SoftLayer HPC Servers

I'm thrilled that we are able to bring the value of hybrid HPC computing to larger numbers of users. And, I can't wait to see the amazing engineering and scientific advances they'll achieve.

-Sumit Gupta, NVIDIA - Tesla

January 26, 2012

Up Close and Personal: Intel Xeon E7-4850

Last year, we announced that we would be the first provider to offer the Intel E7-4800 series server. This bad boy has record-breaking compute power, tons of room for RAM and some pretty amazing performance numbers, and as of right now, it's one of the most powerful servers on the market.

Reading about the server and seeing it at the bottom of the "Quad Processor Multi-core Servers" list on our dedicated servers page is pretty interesting, but the real geeks want to see the nuts and bolts that make up such an amazing machine. I took a stroll down to the inventory room in our DAL05 data center in hopes that they had one of our E7-4850s available for a quick photo shoot to share with customers, and I was in luck.

The only way to truly admire a server is to put it through its paces in production, but getting to see a few pictures of the server might be a distant second.

Intel Xeon E7-4850

When you see the 2U face of the server in a rack, it's a little unassuming. You can load it up with six of our 3TB SATA hard drives for a total of 18TB of storage if you're looking for a ton of space, and if you're focused on phenomenal disk IO to go along with your unbelievable compute power, you can opt for SSDs. If you still need more space, you can order a 4U version with ten drive bays!

Intel Xeon E7-4850

The real stars of the show when it comes to the E7-4850 server are nestled right underneath these heatsinks. Each of the four processors has TEN cores @ 2.00GHz, so in this single box, you have a total of forty cores! I'm not sure how Moore's Law is going to keep up if this is the next step to jump from.

Intel Xeon E7-4850

With the abundance of CPU power, you'll probably want an abundance of RAM. Not coincidentally, we can install up to 512GB of RAM in this baby. It's pretty unbelievable to read the specs available in the decked-out version of this server, and it's even crazier to think that our servers are going to get more and more powerful.

Intel Xeon E7-4850

With all of the processing power and RAM in this box, the case fans had to get a bit of an upgrade as well. To keep enough air circulating through the server, these three case fans pull air from the cold aisle in our data center, cool the running components and exhaust the air into the data center's "hot aisle."

Intel Xeon E7-4850

Because this machine could be used to find the last digit of pi or crunch numbers to find the cure for cancer, it's important to have redundancy ... In the picture above, you see the redundant power supplies that safeguard against a single point of failure when it comes to server power. In each of our data centers, we have N+1 power redundancy, so adding N+1 power redundancy into the server isn't very redundant at all ... It's almost expected!

If your next project requires a ton of processing power, a lot of room for RAM, and redundant power, this server is up for the challenge! Configure your own quad-proc ten-core beast of a machine in our shopping cart or contact our sales team for a customized quote: sales@softlayer.com

When you get done benchmarking it against your old infrastructure, let us know what you think!

-Summer

August 25, 2011

The Beauty of IPMI

Nowadays, it would be extremely difficult to find a household that does not store some form of media – whether it be movies, music, photos or documents – on their home computer. Understanding that, I can say with confidence that many of you have been away from home and suddenly had the desire (or need) to access the media for one reason or another.

Because the Internet has made content so much more accessible, it's usually easy to log in remotely to your home PC using something like Remote Desktop, but what if your home computer is not powered on? You hope a family member is at home to turn on the computer when you call, but what if everyone is out of the house? In the past, most people like me would have just given up altogether since there was no clear and immediate solution. Leaving your computer on all day could work, but what if you're on an extended trip and you don't want to run up your electricity bill? I'd probably start traveling with some portable storage device like a flash drive or portable hard drive to avoid the problem. This inelegant solution requires that I not forget the device, and the storage media would have to be large enough to contain all the necessary files (and I'd also have to know ahead of time which ones I might need).

Given these alternatives, I usually found myself hoping for the best with the portable device, and as anticipated, there would still be some occasions where I didn't happen to have the right files with me on that drive. When I started working for SoftLayer, I was introduced to a mind-blowing technology called IPMI, and my digital life has never been the same.

IPMI – Intelligent Platform Management Interface – is a standardized system interface that allows system administrators to manage and monitor a computer. Though this may be more than what the common person needs, I immediately found IPMI to be incredible because it allows a person to remotely power on any computer with that interface. I was ecstatic to realize that for my next computer build, I could pick a motherboard that has this feature to achieve total control over my home computer for whatever I needed. IPMI may be standard for all servers at SoftLayer, but that doesn't mean it's not a luxury feature.

If you've ever had the need to power on your computers and/or access a computer's BIOS remotely, I highly suggest you look into IPMI. As I've learned more and more about the IPMI technology, I've seen how it can be a critical feature for business purposes, so the fact that it's a standard at SoftLayer shows that we've got our eye out for state-of-the-art technologies that make life easier for our customers.
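For the curious, remote power control over IPMI is commonly done with the open-source ipmitool client. The sketch below just assembles the command; the host and credentials are placeholders, so check your own BMC's settings before running anything:

```python
# Sketch: building an ipmitool command for remote power control.
# Host, user, and password below are placeholders.
import subprocess

def ipmi_power_cmd(host, user, password, action):
    """Build an ipmitool command list; action is e.g. 'status', 'on', 'off'."""
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password, "chassis", "power", action]

cmd = ipmi_power_cmd("192.0.2.10", "admin", "secret", "on")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment against a reachable BMC
```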

Now I don't have to remember where I put that flash drive!

-Danny

July 25, 2011

Under the Hood of 'The Cloud'

When we designed the CloudLayer Computing platform, our goal was to create an offering where customers would be able to customize and build cloud computing instances that specifically meet their needs: If you go to our site, you're even presented with an opportunity to "Build Your Own Cloud." The idea was to let users choose where they wanted their instance to reside as well as their own perfect mix of processor power, RAM and storage. Today, we're taking the BYOC mantra one step further by unveiling the local disk storage option for CloudLayer computing instances!

Local Disk

For those of you familiar with the CloudLayer platform, you might already understand the value of a local disk storage option, but for the uninitiated, this news presents a perfect opportunity to talk about the dynamics of the cloud and how we approach the cloud around here.

As the resident "tech guy" in my social circle, I often find myself helping friends and family understand everything from why their printer isn't working to what value they can get from the latest and greatest buzzed-about technology. As you'd probably guess, the majority of the questions I've been getting recently revolve around 'the cloud' (thanks especially to huge marketing campaigns out of Redmond and Cupertino). That abstract term effectively conveys the intentional sentiment that users shouldn't have to worry about the mechanics of how the cloud works ... just that it works. The problem is that as the world of technology has pursued that sentiment, the generalization of the cloud has abstracted it to the point where this is how large companies are depicting the cloud:

Cloud

As it turns out, that image doesn't exactly elicit the, "Aha! Now I get it!" epiphany of users actually understanding how clouds (in the technology sense) work. See how I pluralized "clouds" in that last sentence? 'The Cloud' at SoftLayer isn't the same as 'The Cloud' in Redmond or 'The Cloud' in Cupertino. They may all be similar in the sense that each cloud technology incorporates hardware abstraction, on-demand scalability and utility billing, but they're not created in the same way.

If only there were a cloud-specific Declaration of Independence ...

We hold these truths to be self-evident, that all clouds are not equal, that they are endowed by their creators with certain distinct characteristics, that among these are storage, processing power and the ability to serve content. That to secure these characteristics, information should be given to users, expressed clearly to meet the needs of the cloud's users;

The Ability to Serve Content
Let's unpack that Jeffersonian statement a little by looking at the distinct characteristics of every cloud, starting with the third ("the ability to serve content") and working backwards. Every cloud lives on hardware. The extent to which a given cloud relies on that hardware can vary, but at the end of the day, you – as a user – are not simply connecting to water droplets in the ether. I'll use SoftLayer's CloudLayer platform as a specific example of what a cloud actually looks like: We have racks of uniform servers – designated as part of our cloud infrastructure – installed in rows in our data centers. All of those servers are networked together, and we worked with our friends at Citrix to use the XenServer platform to tie all of those servers together and virtualize the resources (or more simply: to make each piece of hardware accessible independently of the rest of the physical server it might be built into). With that infrastructure as a foundation, ordering a cloud server on the CloudLayer platform simply involves reserving a small piece of that cloud where you can install your own operating system and manage it like an independent server or instance to serve your content.

Processing Power
Understanding the hardware architecture upon which a cloud is built, the second distinct characteristic of every cloud ("processing power") is fairly logical: The more powerful the hardware used for a given cloud, the better processing performance you'll get in an instance using a piece of that hardware.

You can argue about what software uses the least resources in the process of virtualizing, but apples-to-apples, processing power is going to be determined by the power of the underlying hardware. Some providers try to obfuscate the types of servers/processors available to their cloud users (sometimes because they are using legacy hardware that they wouldn't be able to sell/rent otherwise), but because we know how important consistent power is to users, we guarantee that CloudLayer instances are based on 2.0GHz (or faster) processors.

Storage
We walked backward through the distinct characteristics included in my cloud-specific Declaration of Independence because of today's CloudLayer Computing storage announcement, but before I get into the details of that new option, let's talk about storage in general.

If the primary goal of a cloud platform is to give users the ability to scale instantly from 1 CPU of power to 16 CPUs of power, the underlying architecture has to be as flexible as possible. Let's say your cloud computing instance resides on a server with only 10 CPUs available, so when you upgrade to a 16-CPU instance, your instance will be moved to a server with enough available resources to meet your need. To make that kind of quick change possible, most cloud platforms are connected to a SAN (storage area network) or other storage device via a back-end network to the cloud servers. The biggest pro of having this setup is that upgrading and downgrading CPU and RAM for a given cloud instance is relatively easy, but it introduces a challenge: The data lives on another device that is connected via switches and cables and is being used by other customers as well. Because your data has to be moved to your server to be processed when you call it, it's a little slower than if a hard disk was sitting in the same server as the instance's processor and RAM. For that reason, many users don't feel comfortable moving to the cloud.

In response to the call for better-performing storage, there has been a push toward incorporating local disk storage for cloud computing instances. Because local disk storage is physically available to the CPU and RAM, the transfer of data is almost immediate and I/O (input/output) rates are generally much higher. The obvious benefit of this setup is that the storage will perform much better for I/O-intensive applications, while the tradeoff is that the setup loses the inherent redundancy of having the data replicated across multiple drives in a SAN (which is almost like its own cloud ... but I won't confuse you with that right now).
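The tradeoff described above boils down to a simple decision, sketched here with made-up criteria; real choices depend on measured workloads:

```python
# Illustrative decision sketch for the SAN vs. local disk tradeoff.

def pick_storage(needs_fast_io, needs_easy_scaling):
    if needs_fast_io and not needs_easy_scaling:
        return "local disk"   # lowest latency, highest I/O rates
    return "SAN"              # built-in redundancy, instant CPU/RAM rescaling

print(pick_storage(needs_fast_io=True, needs_easy_scaling=False))  # local disk
```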

The CloudLayer Computing platform has always been built to take advantage of the immediate scalability enabled by storing files on a network storage device. But users who want to run other kinds of applications on the cloud asked us to incorporate another option, so today we're happy to announce the availability of local disk storage for CloudLayer Computing! We're looking forward to seeing how our customers incorporate cloud computing instances with local disk storage into their existing environments alongside dedicated servers and cloud computing instances using SAN storage.

If you have questions about whether the SAN or local disk storage option would fit your application best, click the Live Chat icon on SoftLayer.com and consult with one of our sales reps about the benefits and trade-offs of each.

We want you to know exactly what you're getting from SoftLayer, so we try to be as transparent as we can when rolling out new products. If you have any questions about CloudLayer or any of our other offerings, please let us know!

-@nday91

February 21, 2011

Building a Data Center | Part 1: Follow the Flow

The electrical distribution system in a data center is an important concept that many IT professionals overlook. Understanding the basics of your electrical distribution system can save downtime and aid in troubleshooting power problems in your cabinets. It's easy to understand if you follow the flow.

As with many introductory lessons in electricity, I will use the analogy of a flowing river to help describe the flow of electricity in a data center. The river is akin to wires, the water pressure is the voltage, and the rate at which the water moves is the current flow, also known as amps. So, when looking at an electrical system, think about a flowing river and the paths it must take from its source to the ocean.

External Power Sources
The preferred source of electrical power is delivered to a data center by the local utility company. Once that utility power enters the building, its first stop is usually going to be the ATS or Automatic Transfer Switch. This electro-mechanical device is fed power from two or more sources – a primary and an emergency source. While the primary source is available, the ATS sits happily and flows power to a series of distribution breakers, often called "switch gear." These large breakers are designed to carry hundreds or thousands of amps and pass that power to your uninterruptible power supply (UPS) units and other facility infrastructure: lighting, HVAC, fire life safety systems, etc.

If the primary source becomes unavailable, the ATS triggers the emergency source. In our data center example, that means our on-site generators start up. It typically takes 9 to 12 seconds for the generators to come up to speed to allow for full power generation. Once the ATS sees that the generators have started and are ready to supply power, it will switch the load from the primary source to the emergency source. This is called an open transition because the load is removed from the primary source during the switch to the emergency source.

UPS Units
Once the power leaves the ATS and switch gear, it is no longer important to know whether you are connected to the primary or emergency source. The next step in the power flow is the UPS. Like a dam, the UPS system takes an untamed river and transforms it into something safe and usable: an uninterruptible source of power to your server cabinet.

This is achieved by a bank of batteries sized to support the IT load. The batteries are connected in-line with the supply and load, so while the ATS senses a utility outage and starts the emergency generators, the IT load is still supplied power. A typical UPS battery system is designed to support the IT load for a maximum of 10 minutes.
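That 10-minute figure implies a simple energy budget for the battery bank. A back-of-the-envelope example (the 500 kW pod load is an assumed number, not a SoftLayer spec):

```python
# Back-of-the-envelope UPS battery sizing for a 10-minute ride-through.
# The 500 kW IT load is an assumed example value.

it_load_kw = 500          # assumed pod IT load
ride_through_min = 10     # typical UPS battery runtime

energy_kwh = it_load_kw * ride_through_min / 60
print(round(energy_kwh, 1))  # 83.3
```

In other words, the batteries only need to bridge the seconds-to-minutes gap until the generators pick up the load.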

Another benefit of the UPS system is the ability to clean the incoming utility power. Normal utility power voltages vary wildly depending on what other loads the service is supplying. These voltage fluctuations are detrimental to power supplies in servers and can shorten their life spans or worse: destroy them. This is why most home computers have a surge suppressor to prevent power spikes from damaging your equipment. UPS units clean electrical power by converting utility power from AC to DC and back to AC again:

UPS

Power Distribution Units
After protecting and cleaning the power, the UPS power will flow to a group of power distribution units (PDUs). At this point, the voltage will normally be 480vac, which is too high for most IT equipment. The PDU or a separate transformer has to convert the 480 volts to a more usable voltage like 120vac or 277vac. Once the voltage is converted, the power is then distributed to electrical outlets via a common electrical breaker.
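The 277vac figure isn't arbitrary: in a three-phase 480vac system, the line-to-neutral voltage is the line-to-line voltage divided by the square root of three.

```python
import math

# Three-phase relationship: line-to-neutral = line-to-line / sqrt(3)
line_to_line = 480.0
line_to_neutral = line_to_line / math.sqrt(3)
print(round(line_to_neutral))  # 277
```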

PDU technology has advanced, like all data center equipment, from simple breaker panels to complex devices capable of measuring IT loads, load balancing, alarm and fault monitoring and even automatic switching between two power sources instantly during an outage.

Power Strip
The final piece of equipment in the data center electrical system before your server is a power strip. Power strips are often mistakenly referred to as PDUs. The power strip is mounted in a cabinet and contains multiple electrical outlets, not electrical breakers. You plug the server power cord into the power strip, not the PDU. And from here, the flow of electricity finally reaches the sea of servers.

Here's a basic diagram of a data center electrical distribution system:

Simplified Data Center Power Architecture

Our data centers are complex, and the entire building infrastructure is critical to their continuous operation. The electrical distribution system is at the heart of any critical facility, and it's vital that everyone working in and around critical sites knows at least the basics of the electrical distribution system.

In Part 2 of our "Building a Data Center" series, we'll cover how we keep the facility cool.

-John

December 9, 2010

Records Are Made to be Broken

You know how it works – a casual conversation leads to a Google search the next day. This in turn leads to enlightenment. Or something along those lines.

Last Tuesday morning, a PDF version of the January 30, 1983(!) issue of ‘Arcade Express – The Bi-weekly Electronic Games Newsletter’ arrived in my inbox. It made for good reading and brought me back to the days of my youth when I burned numerous hours and brain cells playing Intellivision, Atari and Commodore machines. I had access to two devices – one that sat in my family room (an Intellivision) and one that sat in a pal’s basement (an Atari 2600). My kids have access to much more – there are numerous devices at their fingertips, including a PS3, a Nintendo DS, a Mac mini and my wife’s iPhone. Most of their friends are in similar circumstances.

A quick comparison is in order:

Device          RAM                           Processor
Vic 20          5 KB                          1.1 MHz
Intellivision   11 KB                         894 KHz
Atari 2600      .125 KB                       1.19 MHz
Nintendo DS     4 MB                          Two ARM processors: 67 MHz and 33 MHz
PS3             256 MB DRAM, 256 MB video     Seven cores @ 3.2 GHz
iPhone 3GS      256 MB eDRAM                  600 MHz
Mac mini        2 GB                          Two cores @ 1.66 GHz

Processing power aside, I think the more important thing to consider is that we are approaching ubiquity for a number of devices in North America. Most people have access to the Internet, most people have access to mobile phones (and more and more of them have access to smartphones like the iPhone or an Android device) and most people have access to a dedicated game device. Western Europe and parts of Asia (Japan and Korea) are in a similar position, and the rest of Asia is soon to follow and will be the beneficiary of the tremendous innovation that is happening today. There is a lot of room for growth and maybe not a whole lot of clarity around what that next generation of devices and games will look like (I predict 3D, AI-driven games played with a dedicated gaming chip implanted in your cortex).

The last page of the ‘Arcade Express’ newsletter detailed the honor roll of ‘The Nation’s Highest Scores’. SoftLayer’s own Jeff Reinis was the top Arcade Game player for Pac-Man. His record was 15,676,420. I wonder how many hours of continuous game playing that is?

-@quigleymar
