Posts Tagged 'Engineering'

December 6, 2012

MongoDB: Architectural Best Practices

With the launch of our MongoDB solutions, developers can provision powerful, optimized, horizontally scaling NoSQL database clusters in real time on bare metal infrastructure in SoftLayer data centers around the world. We worked tirelessly with our friends at 10gen — the creators of MongoDB — to build and tweak hardware and software configurations that enable peak MongoDB performance, and the resulting platform is pretty amazing. As Duke mentioned in his blog post, those efforts followed 10gen's MongoDB best practices, but what he didn't mention was that we created some architectural best practices of our own for MongoDB deployments on our platform.

The MongoDB engineered servers that you order from SoftLayer already implement several of the recommendations you'll see below, and I'll note which have been incorporated as we go through them. Given the scope of the topic, it's probably easiest to break down this guide into a few sections to make it a little more digestible. Let's take a look at the architectural best practices of running MongoDB through the phases of the roll-out process: Selecting a deployment strategy to prepare for your MongoDB installation, the installation itself, and the operational considerations of running it in production.

Deployment Strategy

When planning your MongoDB deployment, you should follow Sun Tzu's (modified) advice: "If you know the [friend] and know yourself, you need not fear the result of a hundred battles." "Friend" was substituted for "enemy" here because the other party is MongoDB. If you aren't familiar with MongoDB, the top of your to-do list should be to read MongoDB's official documentation. That information will give you the background you'll need as you build and use your database. When you feel comfortable with what MongoDB is all about, it's time to "know yourself."

Your most important consideration will be the current and anticipated sizes of your data set. Understanding the volume of data you'll need to accommodate will be the primary driver for your choice of individual physical nodes as well as your sharding plans. Once you've established an expected size of your data set, you need to consider the importance of your data and how tolerant you are of the possibility of lost or lagging data (especially in replicated scenarios). With this information in hand, you can plan and start testing your deployment strategy.

It sounds a little strange to hear that you should test a deployment strategy, but when it comes to big data, you want to make sure your databases start with a strong foundation. You should perform load testing scenarios on a potential deployment strategy to confirm that a given architecture will meet your needs, and there are a few specific areas that you should consider:

Memory Sizing
MongoDB (like many data-oriented applications) works best when the data set can reside in memory. Nothing performs better than a MongoDB instance that does not require disk I/O. Whenever possible, select a platform that has more available RAM than your working data set size. If your data set exceeds the available RAM for a single node, then consider using sharding to increase the amount of available RAM in a cluster to accommodate the larger data set. This will maximize the overall performance of your deployment. If you notice page faults when you put your database under production load, they may indicate that you are exceeding the available RAM in your deployment.
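
If you want to watch for that condition, the mongo shell can surface the relevant counters. Here's a minimal sketch; the field layout matches the 2.x-era serverStatus output, so adjust for your version:

    // Run from the mongo shell against a production mongod.
    var stats = db.serverStatus();
    print("Resident RAM (MB): " + stats.mem.resident);
    print("Mapped data (MB):  " + stats.mem.mapped);
    // A fault counter that climbs steadily under load suggests the
    // working set no longer fits in RAM.
    print("Page faults:       " + stats.extra_info.page_faults);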

Disk Type
If speed is not your primary concern, or if your data set is far larger than any in-memory strategy can support, selecting the proper disk type for your deployment is important. IOPS will be key in selecting your disk type, and the higher the IOPS, the better MongoDB will perform. Local disks should be used whenever possible (network storage can cause high latency and poor performance for your deployment). It's also advised that you use RAID 10 when creating disk arrays.

To give you an idea of what kind of IOPS to expect from a given type of drive, these are the approximate ranges of IOPS per drive in SoftLayer MongoDB engineered servers:

SATA II – 100-200 IOPS
15K SAS – 300-400 IOPS
SSD – 7,000-8,000 IOPS (read), 19,000-20,000 IOPS (write)

CPU
Clock speed and the number of available processors become a consideration if you anticipate using MapReduce. It has also been noted that when running a MongoDB instance with the majority of the data in memory, clock speed can have a major impact on overall performance. If you are planning to use MapReduce or you're able to operate with a majority of your data in memory, consider a deployment strategy that includes a CPU with a high clock/bus speed to maximize your operations per second.
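
For reference, here's what a simple MapReduce job looks like from the mongo shell (2.x syntax); the "orders" collection and its fields are just illustrative:

    // Count orders per customer with MapReduce.
    var map = function () { emit(this.customerId, 1); };
    var reduce = function (key, values) { return Array.sum(values); };
    db.orders.mapReduce(map, reduce, { out: "orders_per_customer" });
    // Results land in the output collection as { _id: key, value: count }.
    db.orders_per_customer.find().sort({ value: -1 }).limit(10);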

Replication
Replication provides high availability of your data if a node fails in your cluster. It should be standard to replicate with at least three nodes in any MongoDB deployment. The most common configuration for replication with three nodes is a 2x1 deployment — two nodes in a primary data center with a backup server in a secondary data center:

[Figure: MongoDB replication, 2x1 deployment across two data centers]
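
As a rough sketch, initiating a replica set like the one pictured takes a single command from the mongo shell; the host names below are placeholders, and the priority values simply bias elections toward the primary data center:

    // 2.x shell syntax; run once against the node you want to start from.
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongo1.dal.example.com:27017", priority: 2 }, // primary DC
        { _id: 1, host: "mongo2.dal.example.com:27017" },              // primary DC
        { _id: 2, host: "mongo3.ams.example.com:27017", priority: 0 }  // backup DC
      ]
    });
    rs.status(); // verify member states once a primary has been elected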

Sharding
If you anticipate a large, active data set, you should plan for a sharded MongoDB deployment. Sharding allows you to partition a single data set across multiple nodes. You can allow MongoDB to automatically distribute the data across nodes in the cluster, or you may elect to define a shard key and create range-based sharding for that key.

Sharding may also help write performance, so you may elect to shard even if your data set is small but requires a high rate of updates or inserts. It's important to note that when you deploy a sharded set, MongoDB requires three (and only three) config server instances, which are specialized mongod processes that track the current shard configuration. Loss of one of these nodes will cause the cluster to go into a read-only mode (for the configuration only) and will require that all three be brought back online before any configuration changes can be made.
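
To make that concrete, here's a minimal sketch of enabling range-based sharding from the mongo shell (2.x syntax, run through a mongos); the database, collection, and shard key names are assumptions:

    sh.enableSharding("mydb");
    // Pick a key with good cardinality; MongoDB splits and balances
    // chunks by ranges of this key.
    sh.shardCollection("mydb.events", { userId: 1 });
    sh.status(); // shows shards, chunk distribution, and the config servers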

Write Safety Mode
There are several write safety modes that govern how MongoDB will handle the persistence of the data to disk. It is important to consider which mode best fits your needs for both data integrity and performance. The following write safety modes are available:

None – This mode provides a deferred, non-blocking writing strategy. It allows for high performance, but there is a small chance that data will be lost if a node fails. There is also the possibility that data written to one node in a cluster will not be immediately available on all nodes in that cluster for read consistency. The 'None' strategy also provides no protection in the case of network failures. That lack of protection makes this mode highly unreliable; it should only be used when performance is a priority and data integrity is not a concern.

Normal – This is the default for MongoDB if you do not select any other mode. It provides a deferred, non-blocking writing strategy, which allows for high performance, but there is a small chance that data will be lost if a node fails. There is also the possibility that data written to one node in a cluster will not be immediately available on all nodes in that cluster for read consistency.

Safe – This mode blocks until MongoDB has acknowledged that it has received the write request, but not until the write has actually been performed. This provides a better level of data integrity and ensures that read consistency is achieved within a cluster.

Journal Safe – Journals provide a recovery option for MongoDB. Using this mode will ensure that the data has been acknowledged and a Journal update has been performed before returning.

Fsync – This mode provides the highest level of data integrity and blocks until a physical write of the data has occurred. This comes with a degradation in performance and should be used only if data integrity is the primary concern for your application.
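
Roughly, the modes above map onto getLastError options in a 2.x-era shell session (driver APIs expose the same flags; the collection here is illustrative):

    db.orders.insert({ item: "widget", qty: 1 });    // fire-and-forget ("None"/"Normal")
    db.runCommand({ getLastError: 1 });              // "Safe": wait for acknowledgement
    db.runCommand({ getLastError: 1, j: true });     // "Journal Safe": wait for the journal commit
    db.runCommand({ getLastError: 1, fsync: true }); // "Fsync": wait for a flush to disk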

Testing the Deployment
Once you've determined your deployment strategy, test it with a data set similar to your production data. 10gen has several tools to help you load test your deployment, and the mongo console has a tool named 'benchRun' which can execute operations from within a JavaScript test harness. These tools will return operation information as well as latency numbers for each of those operations. If you require more detailed information about the MongoDB instance, consider using the mongostat command or MongoDB Monitoring Service (MMS) to monitor your deployment during the testing.
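
As a starting point, a benchRun call looks something like this from the mongo shell; the namespace and host are assumptions, and the op list can mix reads and writes:

    // Insert random documents with 8 parallel workers for 30 seconds.
    var ops = [{
      op: "insert",
      ns: "test.loadtest",
      doc: { x: { "#RAND_INT": [0, 1000] } }
    }];
    var res = benchRun({ ops: ops, parallel: 8, seconds: 30, host: "127.0.0.1:27017" });
    printjson(res); // throughput and latency numbers per operation type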

Installation

When performing the installation of MongoDB, a few considerations can help create both a stable and performance-oriented solution. 10gen recommends the use of CentOS (64-bit) as the base operating system if at all possible. If you try installing MongoDB on a 32-bit operating system, you might run into file size limits that cause issues, and if you feel the urge to install it on Windows, you'll see performance issues if virtual memory begins to be utilized by the OS to make up for a lack of RAM in your deployment. As a result, 32-bit operating systems and Windows should be avoided on MongoDB servers. SoftLayer provisions CentOS 6.X 64-bit operating systems by default on all of our MongoDB engineered server deployments.

When you've got CentOS 64-bit installed, you should also make the following changes to maximize your performance (all of which are included by default on all SoftLayer engineered servers):

Set SSD Read Ahead Defaults to 16 Blocks - SSD drives have excellent seek times, allowing the read-ahead to shrink to 16 blocks. Spinning disks might require slight buffering, so those have been set to 32 blocks.

noatime - Adding the noatime option eliminates the need for the system to make writes to the file system for files which are simply being read — or in other words: Faster file access and less disk wear.

Turn NUMA Off in BIOS - Linux, NUMA and MongoDB tend not to work well together. If you are running MongoDB on NUMA hardware, we recommend turning NUMA off (running with an interleave memory policy). If you don't, problems will manifest in strange ways, like massive slowdowns for periods of time or high system CPU time.

Set ulimit - We have set the ulimit to 64000 for open files and 32000 for user processes to prevent failures due to a loss of available file handles or user processes.

Use ext4 - We have selected ext4 over ext3. We found ext3 to be very slow in allocating files (or removing them). Additionally, access within large files is poor with ext3.

One last tip on installation: Keep the Journal and Data directories on distinct physical volumes. If they reside on a single physical volume, flushes to the Journal will interrupt data access and produce spikes of high latency within your MongoDB deployment.

Operations

Once a MongoDB deployment has been promoted to production, there are a few recommendations for monitoring and optimizing performance. You should always have the MMS agent running on all MongoDB instances to help monitor the health and performance of your deployment. This tool is also very useful if you have a 10gen MongoDB Cloud Subscription because it provides useful debugging data for the 10gen team during support interactions. In addition to MMS, you can use the mongostat command (mentioned in the deployment section) to see runtime information about the performance of a MongoDB node. If either of these tools flags performance issues, sharding or indexing are first-line options to resolve them:

Indexes - Indexes should be created for a MongoDB deployment if monitoring tools indicate that field-based queries are performing poorly. Always use indexes when you are querying data based on distinct fields to help boost performance (see the sketch after this list).

Sharding - Sharding can be leveraged when the overall performance of the node is suffering because of a large operating data set. Be sure to shard before you get in the red: The system only splits chunks for sharding on insert or update, so if you wait too long to shard, you may see uneven distribution for a period of time, or forever, depending on your data set and shard key strategy.
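
Here's a minimal sketch of that index workflow in the 2.x shell (collection and field names are illustrative):

    db.users.find({ email: "someone@example.com" }).explain(); // "BasicCursor" means a full scan
    db.users.ensureIndex({ email: 1 });                        // build the index (2.x API)
    db.users.find({ email: "someone@example.com" }).explain(); // now reports "BtreeCursor email_1"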

I know it seems like we've covered a lot over the course of this blog post, but this list of best practices is far from exhaustive. If you want to learn more, the MongoDB forums are a great resource to connect with the rest of the MongoDB community and learn from their experiences, and the documentation on MongoDB's site is another phenomenal resource. The best people to talk to when it comes to questions about MongoDB are the folks at 10gen, so I also highly recommend taking advantage of MongoDB Cloud Subscriptions to get their direct support for your one-off questions and issues.

-Harold

July 26, 2012

Global IP Addresses - What Are They and How Do They Work?

SoftLayer recently released "Global IPs" to a good amount of internal fanfare, and I thought I'd share a little about it with the blog audience in case customers have questions about what Global IPs are and how they work. Simply put, Global IP addresses can be provisioned in any data center on the SoftLayer network and moved to another facility if necessary. You can point a Global IP to a server in Dallas, and if you need to perform maintenance on that server, you can move the IP address to a server in Amsterdam to seamlessly (and almost immediately) transition your traffic. If you spin up and turn down workloads on cloud computing instances, you can keep a specific IP address when you completely turn down an environment, and you can quickly reprovision the IP on a new instance when you spin up the next workload.

How Do Global IPs Work?

The basics of how the Internet works are simple: Packets are sent between you and a server somewhere based on the location of the content you've requested. That location is pinpointed by an IP address that is assigned to a specific server or cloud. Often, for various reasons, blocks of IP addresses are provisioned in one region or location, so Global IPs are a bit of a departure from the norm.

When you're sending/receiving packets, you might think the packets "know" the exact physical destination as soon as they're directed to an IP address, but in practice, they don't have to ... The packets are forwarded along a path of devices with a general idea of where the exact location will be, but the primary concern of each device is to get all the packets to the next hop in the network path as quickly as possible by using default routes and routing tables. As an example, let's follow a packet as it comes from an external webserver and detail how it gets back to your machine:

  1. The external webserver sends the packet to a local switch.
  2. The switch passes it to a router.
  3. The packet traverses a number of network hops (other routers) and enters the SoftLayer network at one of the backbone routers (BBR).
  4. The BBR looks at the IP destination and compares it to a table shared and updated with the other routers on SoftLayer's network, and it locates the subnet the IP belongs to.
  5. The BBR determines which distribution aggregate router (DAR) the IP is located behind, then sends the packet to the BBR closest to that DAR.
  6. The DAR gets the packet, looks at its own tables, and finds the front-end customer router (FCR) that the subnet lives on, and sends it there.
  7. The FCR routes the packet to the front-end customer switch (FCS) that has that IP mapped to the proper MAC address.
  8. The switch then delivers the packet through the proper switchport.
  9. Your server gets the packet from the FCS, and the kernel goes, "Oh yes, that IP on the public port, I'll accept this now."

All of those steps happen in an instant, and for you to be reading this blog, the packets carrying this content would have followed a similar pattern to the browser on your computer.

The process is slightly different when it comes to Global IP addresses. When a packet is destined for a Global IP, as soon as it gets onto the SoftLayer network (step 4 above), the routing process changes.

We allocate subnets of IP addresses specifically to the Global IP address pool, and we tell all the BBRs that these IPs are special. When you order a Global IP, we peel off one of those IPs, add a static route from it to your chosen server's IP address, and then announce that route to all the BBRs. Rather than treating the Global IP as an endpoint, the network expects your server to act as a router and do something with the packet when it is received. I know that could sound a little confusing since we aren't really using the server as a router, so let's follow a packet to your Global IP (picking up after the first three steps from above):

  1. The BBR notes that this IP belongs to one of the special Global IP address subnets, and matches the destination IP with the static route to the destination server you chose when you provisioned the Global IP.
  2. The BBR forwards the packet to the DAR, which then finds the FCR, then hands it off to the switch.
  3. The switch hands the packet to your server, and your server accepts it on the public interface like a regular secondary IP.
  4. Your server then essentially "routes" the packet to an IP address on itself.

Because the Global IP address can be moved to different servers in different locations, whenever you change the destination IP, the static route is quickly updated in our routing tables. Because the change happens exclusively on SoftLayer's infrastructure, you don't have to wait for other providers to propagate the change. Think of updating your site's domain to a new IP address via DNS as an example: Even after you update your authoritative DNS servers, you have to wait for your users' DNS servers to recognize and update the new IP address. With Global IPs, the IP address remains the same, and all users follow the new path as soon as our routers update.

This initial release of Global IP addresses is just the tip of the iceberg when it comes to functionality. The product management and network engineering teams are getting customer feedback and creating roadmaps for the future of the product, so we'd love to hear your feedback and questions. If you want a little more in-depth information about installation and provisioning, check out the Global IP Addresses page on KnowledgeLayer.

-Jason

June 30, 2011

Having a Computer Guy in the House

This SoftLayer Blog entry actually comes to us from Kate Moseley (Age 10), daughter of VP of Network Engineering Ric Moseley.

I think it is cool that my dad is a computer guy that works for SoftLayer because he is always able to fix our computers, TVs, and anything electronic. His job is to order and fix computer networks. He also likes messing with anything technical at home including iPods, iPhones, computers, TVs, etc.

My dad is always working so hard to earn money for our family. Sometimes he's so busy emailing people at work that when you ask him a question, it's like he can't even hear you. I also think that it's cool that he gets to travel to a different state almost every month it seems like. I love going to my dad's office because I get to see what it's like working in an office with so many people in such a busy place.

My dad goes to many meetings with his boss, Lance, and the rest of the staff. When he's not at his office, he's still working really hard at home! Sometimes he stays up till 4 o'clock in the morning to help fix things at his work. One time he got a call while we were on vacation saying that a router was down at the data center and he needed to come back ASAP! So he packed up his bags and headed back to Dallas! Sometimes we don't even get to sit down and have an actual meal as a family because he always misses dinner and sometimes he's on a conference call for more than 2 hours at a time.

My dad used to work at The Planet. He and 9 other people came up with the company called "SoftLayer." SoftLayer recently merged with The Planet, and now they are one big company. His company is always getting bigger, so almost every year they have to move offices to a different location. My dad loves his job because he gets to interact with one of his favorite things: Technology. SoftLayer has given my family an opportunity to do many things in life that we would not ever have had the chance to do.

Someday I hope to be a part of SoftLayer just like my dad is today.

- Kate Moseley

If you share Kate's hope to one day be a part of the SoftLayer team, visit the SoftLayer Careers page. We have more than 50 positions available in Dallas, Houston, Washington, D.C., Seattle, San Francisco and Amsterdam. As Kate explained, SoftLayer is growing like crazy, so whether your background is in Finance, Technical Support, Facilities, Human Resources, IT, Marketing, Sales or Development, we want you to join us!

April 20, 2011

3 Bars | 3 Questions: SoftLayer Managed Hosting

I know you expected to see a video interview with Paul Ford the next time a 3 Bars | 3 Questions episode rolled across your desk, but I snuck past him for a chance in the spotlight this week. Kevin and I jumped on a quick video chat to talk about the Sales Engineering team, and because of our recent release of SoftLayer Managed Hosting, two of the three questions ended up being about that news:

You should be seeing a blog from Nathan in the next half hour or so with more detail about how we approached managed hosting, so you'll have all the background you need to springboard into that post after you watch this video.

If you've heard everything you need to hear about managed hosting and want to start the process of adding it to servers on your account, visit http://www.softlayer.com/solutions/managed-hosting/ or chat with a sales rep, and they can help you get squared away. If you're not sure whether it's a good fit, ask for a sales engineer to consult ... They're a great group with a pretty awesome manager. :-)

Paul, sorry for stealing your spot in the 3 Bars | 3 Questions rotation! I'm handing the baton back over to you to talk about TechWildcatters and the Technology Partners Marketplace in the next episode.

-Tam

August 19, 2010

The Girls' Engineering Club

I remember when I got started in computing. For the morbidly curious it was officially "a long time ago" and I'm afraid that's all I'm going to say other than to note that a major source of inspiration for me was the movie TRON, or more specifically the computer graphics in that movie (naturally I'm looking forward to the release of the new TRON movie!).

Computers have come a long way since then, and what they've gained in power, they've also lost in simplicity. To draw an analogy, the kids of my father's generation, who spent a lot of time in the garage tinkering with cars, would have to make a big technological leap before they could monkey with the guts of today's newfangled automobiles. In a similar fashion, the computers of my era, with built-in Integer BASIC and simple graphics modes, have given way to the mouse-driven, fully graphical user interfaces of today. Where I started programming by entering a few lines of text at a prompt and watching my code spit out streams of text in return, these days an aspiring programmer has to create a significant chunk of code to put up a window into which they can display their results before they can write the code that generates those results.

In short, there's a bit more of a learning curve to get started. While kids are a bit farther along when they start out, it doesn't hurt to give them a push where you can.

Several months ago, the counselor at the local elementary school called to invite my daughter to join a newly-formed Engineering Club for the girls in the fifth grade. My daughter had scored well in her math and science tests and they wanted her to be a part of a pilot program to help foster an interest in science and engineering. For various reasons (most having to do with bureaucracy) the school was unable to get the program off the ground. My wife, not wanting the girls to miss out on an opportunity, took the program off-campus and created an informal club, divorced from the school, and driven by the parents. The Girls Engineering Club was born.

The club has a dozen or so young ladies as members, and since they're not tied to the school calendar, they have met once or twice a month through the summer. In the club they explore applications of science, mathematics, and technology with a particular focus on experimentation. For example, the club formed shortly after the recent oil spill in the Gulf of Mexico. The girls spent their first meeting talking about what the professional engineers were doing at the time, and then trying to find ways to separate motor oil from water using things like sand, coffee filters and dish soap. When I got home that day, I saw the aftermath. I hope the girls learned a lot... it was certainly clear that they had made a big mess and had a lot of fun.

It became my turn to help when the club took up the subject of Software Engineering. I'd like to say that the club leadership took me on because I have degrees in Computer Science and I'm a professional Software Engineer by trade. In truth, however, I think it was just my wife who thought I needed something better to do with my weekend than play video games. For whatever reason, however, I was pressed into service to teach the girls about Software Engineering.

Naturally I wanted to teach the girls a little bit about how engineering principles apply to the creation of software. But I imagine that a group of pre-teen girls would find an hour-and-a-half exposition on the subject at best half as exciting as that last sentence makes it sound. Moreover, these girls were used to hands-on engineering club meetings. If the girls didn't "get their hands dirty" with at least a little bit of programming, the meeting would be a bust. The problem was... how do you teach a dozen pre-teen girls about programming, and on a shoestring budget?

When I was taking computer science classes in school we had very expensive labs with carefully controlled software environments. For the club, each girl probably has a computer at their house, but I wasn't anxious to ask parents to pull them out of place, drag them somewhere that we could set them up, and then slog through the nightmare of trying to get a semi-uniform environment on them.

Instead, I gathered up my veritable museum of computer hardware. Using those that were only a few years old, and still capable of running Mac OS X, I pulled together three that could be wirelessly networked together and have their screens shared. It was a bit of an ad-hoc arrangement, but functional.

Next came the question of subject matter. In my daily life I work with a programming language called Objective-C. Objective-C is a really fun language, but it requires a pretty hefty tool chain to use effectively. I didn't want to burn a lot of my hour and a half with the girls teaching them about development tools... I wanted them writing code. Clearly Objective-C wasn't the answer.

A while back I read about a book called Learn to Program by Chris Pine. Mr. Pine had created a web site dedicated to helping people who had never programmed before learn enough to get started. After the web site had been around a while, and after a bunch of folks had offered their comments and suggestions to improve it, he collected the information from the web site into the book.

The book uses a programming language called Ruby as its teaching tool. Ruby is a fantastic language. It's one of the so-called "fourth generation" scripting languages (along with Python, Perl, JavaScript, and others). The language was designed to scale from the needs of the novice programmer up to the demands of the professional Software Engineer. For the girls in the club, however, the nice thing about Ruby is that it provides a "Read, Evaluate, Print, Loop" (REPL) tool called IRB (Interactive Ruby). Using IRB, you can type in a Ruby expression and see the results of executing that expression right away. This would provide the great hands-on experience I was looking for in a reasonably controlled environment. More importantly, it would run (and run the same way) on my collection of rapidly-approaching-vintage hardware.

I wanted to get a copy of the book for the girls. The Pragmatic Programmers offers many of their books, including this one, in electronic formats (PDF and eBook). I contacted them about a volume or educational discount on a PDF copy of the book. A company representative was kind enough to donate the book for the girls in our club!! You could have knocked me over with a feather. That gift put the train on the track and the wheels in motion.

(In appreciation, let me mention that Learn To Program is available in its Second Edition from The Pragmatic Bookshelf today. This is not an official endorsement by SoftLayer, but it is an enthusiastic recommendation from your humble author who is very grateful for their generous gift).

In the end, the club meeting was on a very rainy day. We struggled to keep the computer equipment dry as we hauled it to the home of one of the club members. Their poor kitchen table became a tangle of cords carrying power and video signals. Using shared screens, and my iPad as a presentation controller, I walked the girls through a Keynote presentation about some of the basic concepts of Software Engineering. Then we fired up IRB, and I showed the girls how to work with numbers, variables, and simple control structures in Ruby. They had to sit three to a computer, but that also let them help one another out. They learned to use loops to print out silly things about me (for example, when I had my computer print out "Mr. Thompson rocks!", the girls felt that they absolutely must get their computer to print "Mr. Thompson most certainly does not rock!" 1000 times). There was an awful lot of giggling, but as the teacher I was proud to see them pick up the basic concepts and apply them to their own goals. My favorite exclamation was "Wow! I could use this to help me with my homework."

As a Software Engineer, I spend an awful lot of my time sitting in front of a screen watching text scroll by. My colleagues and I have meetings where we work together on hard problems and come up with creative solutions, but just as the computing environments of the day have become more complex, I've become a bit jaded to the discovery and wonder I enjoyed when I poked away at my computer keyboard all those years ago. One of the benefits of volunteering is not what you do for others, but what they can do for you. With the Girls Engineering Club, I got to experience a little of that joy of discovery once again. The price was a little elbow grease, some careful thought, and a bit of my time. It was absolutely a bargain.

August 28, 2008

The Speed of Light is Your Enemy

One of my favorite sites is highscalability.com. As someone with an engineering background, reading about the ways other people solve a variety of problems is really quite interesting.

A recent article talks about the impact of latency on web site viewers. It sounds like common sense that the slower a site is, the more viewers you lose, but what is amazing is that even a latency measured in milliseconds can cost a web site viewers.

The article focuses mainly on application-specific solutions to latency and briefly mentions how to deliver static content like images, videos, documents, etc. There are a couple of ways to attack the static content delivery problem, such as making your web server as efficient as you can. But that can only help so much. Physics - the speed of light - starts to be your enemy. If you are truly worried about shaving milliseconds off your content delivery time, you have to get your content closer to your viewers.

You can do this yourself by getting servers in datacenters in multiple sites in different geographic locations. This isn't the easiest solution for everyone but does have its advantages such as keeping you in absolute control of your content. The much easier option is to use a CDN (Content Delivery Network).

CDNs are getting more popular and the price is dropping rapidly. Akamai isn't the only game in town anymore, and you don't have to pay dollars per GB of traffic or sign a contract with a large commit for a multi-year time frame. CDN traffic can be very competitive, costing only a few pennies more per GB compared with traffic from a shared or dedicated server. Plus, CDNs optimize their servers for delivering content quickly.

Just to throw some math into the discussion, let's see how long it would take light to travel from New York to San Francisco (4,125,910 meters / 299,792,458 meters per second = 13.8 milliseconds). That's 13.8 milliseconds one way; now double it for the request to go there and the response to return, and we're up to 27.5 milliseconds. And that assumes a straight shot with no routers slowing things down. Let's look at Melbourne to London (16,891,360 meters / 299,792,458 meters per second = 56.3 milliseconds). Now double that, throw in some router overhead, and you can see that the delays start to be noticeable.
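
If you want to play with the numbers yourself, this little JavaScript sketch reproduces the arithmetic above (vacuum speed of light; real fiber paths are both slower and longer):

    var C = 299792458; // speed of light in meters per second
    function roundTripMs(meters) {
      return (2 * meters / C) * 1000;
    }
    console.log(roundTripMs(4125910).toFixed(1));  // New York <-> San Francisco: 27.5 ms
    console.log(roundTripMs(16891360).toFixed(1)); // Melbourne <-> London: 112.7 ms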

The moral of the story is that for most everybody, distributing static content geographically using a CDN is the right thing to do. That problem has been solved. The harder problem is how to get your application running as efficiently as possible. I'll leave that topic for another time.

-@nday91
