Posts Tagged 'Cloud'

December 11, 2013

2013 at SoftLayer: Year in Review

I'm going into my third year at SoftLayer and it feels like "déjà vu all over again," to quote Yogi Berra. The breakneck pace of innovation, cloud adoption and market consolidation only seems to be accelerating.

The BIG NEWS for SoftLayer was announced in July when we became part of IBM. Plenty has already been written about the significance of this acquisition but as our CEO, Lance Crosby, eloquently put it in an earlier blog, "customers and clients from both companies will benefit from a higher level of choice and a higher level of service from a single partner. More important, the real significance will come as we merge technology that we developed within the SoftLayer platform with the power and vision that drives SmartCloud and pioneer next-generation cloud services."

We view our acquisition as an interesting inflection point for the entire cloud computing industry. The acquisition has ramifications that go beyond the IaaS market and extend to both PaaS and SaaS offerings. As the foundation for IBM's SmartCloud offerings, a one-stop shop for an entire portfolio of cloud services will resonate with startups and large enterprises alike. We're also seeing a market that is rapidly consolidating, and only those with global reach, deep pockets, and an established customer base will survive.

With IBM's support and resources, SoftLayer's plans for customer growth and geographic expansion have hit the fast track. News outlets are already abuzz with our plans to open a new data center facility in Hong Kong in the first quarter of next year, and that's just the tip of the iceberg for our extremely ambitious 2014 growth plans. Given the huge influx of opportunities our fellow IBMers are bringing to the table, we're going to be busy building data centers to stay one step ahead of customer demand.

The IBM acquisition generated enough news to devote an entire blog to, but because we've accomplished so much in 2013, I'd be remiss if I didn't create some space to highlight some of the other significant milestones we achieved this year. The primary reason SoftLayer was attractive to IBM in the first place was our history of innovation and technology development, and many of the product announcements and press releases we published this year tell that story.

Big Data and Analytics
Big data has been a key focus for SoftLayer in 2013. With the momentum we generated when we announced our partnership with MongoDB in December of 2012, we've been able to develop and roll out high-performance bare metal solution designers for Basho's Riak platform and Cloudera Hadoop. Server virtualization is a phenomenal boon to application servers, but disk-heavy, I/O-intensive operations can easily exhaust the resources of a virtualized environment. Because Riak and Hadoop are two of the most popular platforms for big data architectures, we teamed up with Basho and Cloudera to engineer server configurations that would streamline provisioning and supercharge the operations of their data-rich environments. From the newsroom in 2013:

  • SoftLayer announced the availability of Riak and Riak Enterprise on SoftLayer's IaaS platform. This partnership with Basho gives users the availability, fault tolerance, operational simplicity, and scalability of Riak combined with the flexibility, performance, and agility of SoftLayer's on-demand infrastructure.
  • SoftLayer announced a partnership with Cloudera to provide Hadoop big data solutions in a bare metal cloud environment. These on-demand solutions were designed with Cloudera best practices and are rapidly deployed with SoftLayer's easy-to-use solution designer tool.

Cutting-Edge Customers
Beyond the pure cloud innovation milestones we've hit this year, we've also seen a few key customers in vertical markets do their own innovating on our platform. These companies run the gamut from next-generation e-commerce companies to interactive marketers and game developers who require high-performance cloud infrastructure to build and scale the next leading application or game. Some of these game developers and cutting-edge tech companies are pretty amazing, and we're glad we tapped into them to tell our story:

  • Asia's hottest tech companies looking to expand their reach globally are relying on SoftLayer's cloud infrastructure to break into new markets. Companies such as Distil Networks, Tiket.com, Simpli.fi, and 6waves are leveraging SoftLayer's Singapore data center to build out their customer base while delivering their applications or games to users across the region with extremely low latency.
  • In March, we announced that hundreds of the top mobile, PC and social games, with more than 100 million active players, are now supported on SoftLayer's infrastructure platform. Gaming companies -- including Hothead Games, Geewa, Grinding Gear Games, Peak Games and Rumble Entertainment -- are flocking to SoftLayer because they can roll out virtual and bare-metal servers along with a suite of networking, security and storage solutions on demand and in real time.

Industry Recognition
SoftLayer's success and growth are a collective effort; however, it is nice to see our founder and CEO, Lance Crosby, get some well-deserved recognition. In August, the Metroplex Technology Business Council (MTBC), the largest technology trade association in Texas, named him the winner of its Corporate CEO of the Year award during the 13th Annual Tech Titans Awards ceremony.

The prestigious annual contest recognizes outstanding information technology companies and individuals in the North Texas area who have made significant contributions during the past year locally, as well as to the technology industry overall.

We're using the momentum we've continued building in 2013 to propel us into 2014. An upcoming milestone, just around the corner, will be our participation at Pulse 2014 in late February. At this conference we plan to unveil the ongoing integration efforts taking place between SoftLayer and IBM, including how:

  • SoftLayer provides flexible, secure, cloud-based infrastructure for running the toughest and most mission-critical workloads in the cloud;
  • SoftLayer is the foundation of IBM PaaS offerings for cloud-native application development and deployment;
  • SoftLayer is the platform for many of IBM's SaaS offerings supporting mobile, social and analytic applications. IBM has a growing portfolio of roughly 110 SaaS applications.

Joining forces with IBM will have its challenges, but the opportunities ahead look amazing. We encourage you to watch this space for even more activity next year and join us at Pulse 2014 in Las Vegas.

-Andre

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, a growing number of businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by two or more heterogeneous elements. In the traditional IT sense, those heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, a hybrid environment leverages different heterogeneous elements: bare metal and virtual server instances, both delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)

Traditional Hybrid

Figure 2: SoftLayer's Hybrid - Dedicated + Virtual

SoftLayer Hybrid

Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. Actually, the use of the term is a lot more similar than I expected. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads from on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves from a CapEx to an OpEx model. In the past, adopting infrastructure as a service (IaaS) involved shoehorning workloads into whichever virtual resource most closely matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.
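To make that concrete, here's a minimal sketch of what provisioning that kind of hybrid might look like with the softlayer-python client. The hostnames, sizes and data center below are illustrative assumptions, not a recommended configuration.

    # A minimal sketch, assuming the softlayer-python client: one virtual server
    # instance for the web tier plus a look at the bare metal servers that carry
    # the I/O-heavy workloads. Hostnames, sizes and the data center are placeholders.
    import SoftLayer

    client = SoftLayer.create_client_from_env()  # reads username/API key from env or ~/.softlayer

    # Virtual server instance: quick to provision and easy to scale out.
    vs_manager = SoftLayer.VSManager(client)
    web_node = vs_manager.create_instance(
        hostname='web01', domain='example.com',
        cpus=2, memory=4096, hourly=True,
        os_code='UBUNTU_LATEST', datacenter='dal05')
    print('Provisioning virtual server:', web_node['id'])

    # Bare metal: dedicated resources for workloads that need native performance.
    # Ordering new bare metal goes through the same API; here we simply list
    # what's already in the account.
    hw_manager = SoftLayer.HardwareManager(client)
    for server in hw_manager.list_hardware():
        print('Bare metal server:', server.get('hostname'))

In the traditional hybrid of Figure 1, the virtual half would connect back to on-premise systems over a VPN; in the SoftLayer hybrid of Figure 2, both halves live in the same off-premise environment.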

The real-world applications are pretty obvious: Your company is considering moving part of a workload to cloud in order to handle peak season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Furthering the dilemma, you have to capitalize on the assets you already have that are still of use to the company.

You finally have the flexibility to slowly transition your environment to a scalable, flexible cloud environment without sacrificing performance. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, so you shouldn't feel pressure to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will reduce your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so choose the best assets to move over. "Best" does not necessarily mean "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.

August 19, 2013

The 5 Mortal Sins of Launching a Social Game

Social network games have revolutionized the gaming industry and created an impressive footprint on the Web as a whole. 235 million people play games on Facebook every month, and some estimates say that by 2014, more than one third of the Internet population will be playing social games. Given that market, it's no wonder that the vast majority of game studios, small or big, have prioritized games to be played on Facebook, Orkut, StudiVZ, VK and other social networks.

Developing and launching a game in general is not an easy task. It takes a lot of time, a lot of people, a lot of planning and a lot of assumptions. On top of those operational challenges, the social gaming market is a jungle where "survival of the fittest" is a very, VERY visible reality: One day everyone is growing tomatoes, the next they are bad guys taking over a city, and the next they are crushing candies. An army of genius developers with the most stunning designs and super-engaging game ideas can find it difficult to navigate the fickle social waters, but in the midst of all of that uncertainty, the most successful gaming studios have all avoided five of the most common mortal sins gaming companies commit when launching a social game.

SoftLayer isn't a gaming studio, and we don't have any blockbuster games of our own, but we support some of the most creative and successful gaming companies in the world, so we have a ton of indirect experience and perspective on the market. In fact, leading up to GDC Europe, I was speaking with a few of the brilliant people from KUULUU — an interactive entertainment company that creates social games for leading artists, celebrities and communities — about a new Facebook game they've been working on called LINKIN PARK RECHARGE:

After learning more about how KUULUU streamlines the process of developing and launching a new title, I started thinking about the market in general and the common mistakes most game developers make when they release a social game. So without further ado...

The 5 Mortal Sins of Launching a Social Game

1. Infinite Focus

Treat focus as a limited resource. If it helps, look at your team's cumulative capacity to focus as though it's a single cube. To dedicate focus to different parts of the game or application, you'll need to slice the cube. The more pieces you create, the thinner the slices will be, and you'll be devoting less focus to the most important pieces (which often results in worse quality). If you're diverting a significant amount of attention from building out the game's story line to perfecting the textures of a character's hair or the grass on the ground, you'll wind up with an aesthetically beautiful game that no one wants to play. Of course that example is an extreme, but it's not uncommon for game developers to fall into a less blatant trap like spending time building and managing hosting infrastructure that could better be spent tweaking and improving in-game performance.

2. Eeny, Meeny, Miny, Moe – Geographic Targeting

Don't underestimate the power of the Internet and its social and viral drivers. You might believe your game will take off in Germany, but when you're publishing to a global social network, you need to be able to respond if your game becomes hugely popular in Seoul. A few enthusiastic Tweets or wall posts from the alpha-players in Korea might be the catalyst that takes your user base in the region from 1,000 to 80,000 overnight and to 2,000,000 in a week. With that boom in demand, you need to have the flexibility to supply that new market with the best quality service ... And having your entire infrastructure in a single facility in Europe won't make for the best user experience in Asia. Keep an eye on the traction your game has in various regions and geolocate your content closer to the markets where you're seeing the most success.

3. They Love Us, so They'll Forgive Us.

Often, a game's success can lure gaming companies into a false sense of security. Think about it in terms of the point above: 2,000,000 Koreans are trying to play your game a week after a great article is published about you, but you don't make any changes to serve that unexpected audience. What happens? Players time out, latency drags the performance of your game to a crawl, and 2,000,000 users are clicking away to play one of the other 10,000 games on Facebook or 160,000 games in a mobile app store. Gamers are fickle, and they demand high performance. If they experience anything less than a seamless experience, they're likely to spend their time and money elsewhere. Obviously, there's a unique balance for every game: A handful of players will be understanding of the fact that you underestimated the amount of incoming requests and that you need time to add extra infrastructure or move it elsewhere to decrease latency, but even those players will get impatient when they experience lag and downtime.

KUULUU took on this challenge in an innovative, automated way. They monitor the performance of all of their games and immediately ramp up infrastructure resources to accommodate growth in demand in specific areas. When demand shifts from one of their games to another, they're able to balance their infrastructure accordingly to deliver the best end-user experience at all times.
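As an illustration of that monitor-and-scale pattern (and only an illustration, not KUULUU's actual implementation), here's a rough sketch. The latency metric, thresholds, regions and load-balancer hook are hypothetical placeholders.

    # A rough sketch of the monitor-and-scale loop described above -- not
    # KUULUU's actual implementation. get_p95_latency_ms() and register_with_lb()
    # are placeholders for whatever monitoring and load-balancing hooks a real
    # deployment would use.
    import time
    import SoftLayer

    LATENCY_THRESHOLD_MS = 250    # illustrative threshold
    CHECK_INTERVAL_SECONDS = 60

    client = SoftLayer.create_client_from_env()
    vs_manager = SoftLayer.VSManager(client)

    def get_p95_latency_ms(region):
        """Placeholder: pull 95th-percentile latency from your monitoring system."""
        raise NotImplementedError

    def register_with_lb(instance):
        """Placeholder: add the new instance to the region's load balancer pool."""
        raise NotImplementedError

    while True:
        for region, datacenter in [('eu', 'ams01'), ('asia', 'sng01')]:
            if get_p95_latency_ms(region) > LATENCY_THRESHOLD_MS:
                # Hourly virtual server instances can be added now and cancelled
                # later as demand shifts between games and regions.
                instance = vs_manager.create_instance(
                    hostname='game-%s-%d' % (region, int(time.time())),
                    domain='example.com', cpus=4, memory=8192,
                    hourly=True, os_code='UBUNTU_LATEST', datacenter=datacenter)
                register_with_lb(instance)
        time.sleep(CHECK_INTERVAL_SECONDS)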

4. We Will Be Thiiiiiiiiiiis Successful.

Don't count your chickens before the eggs hatch. You never really, REALLY know how a social game will perform when the viral factor influences a game's popularity so dramatically. Your finite plans and expectations wind up being a list of guesstimates and wishes. It's great to be optimistic and have faith in your game, but you should never have to over-commit resources "just in case." If your game takes two months to get the significant traction you expect, the infrastructure you built to meet those expectations will be underutilized for two months. On the other hand, if your game attracts four times as many players as you expected, you risk overburdening your resources as you scramble to build out servers. This uncertainty is one of the biggest drivers of cloud computing adoption, and it leads us to the last mortal sin of launching a social game ...

5. Public Cloud Is the Answer to Everything.

To all the bravados who feel they have mastered the cloud and see it as the answer to all their problems: please, for your fans' sake, remember that the cloud has more than one flavor. Virtual instances in a public cloud environment can be provisioned within minutes and are awesome for your web servers, but they may not perform well for your databases or processor-intensive requirements. KUULUU chose to incorporate bare metal cloud into a hybrid environment where a combination of virtual and dedicated resources work together to provide incredible results:

LP RECHARGE

Avoiding these five mortal sins doesn't guarantee success for your social game, but at the very least, you'll sidestep a few common landmines. For more information on KUULUU's success with SoftLayer, check out this case study.

-Michalina

July 29, 2013

A Brief History of Cloud Computing

Believe it or not, "cloud computing" concepts date back to the 1950s when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via "dumb terminals" – stations whose sole function was to facilitate access to the mainframes. Due to the cost of buying and maintaining mainframes, an organization wouldn't be able to afford a mainframe for each user, so it became practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple decades later in the 1970s, IBM released an operating system called VM that allowed admins on their System/370 mainframe systems to have multiple virtual systems, or "Virtual Machines" (VMs) on a single physical node. The VM operating system took the 1950s application of shared access of a mainframe to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software that you see nowadays can be traced back to this early VM OS: Every VM could run custom operating systems or guest operating systems that had their "own" memory, CPU, and hard drives along with CD-ROMs, keyboards and networking, despite the fact that all of those resources would be shared. "Virtualization" became a technology driver, and it became a huge catalyst for some of the biggest evolutions in communications and computing.

Mainframe Computer

In the 1990s, telecommunications companies that had historically only offered single dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to allow for more users to have their own connections, telco companies were able to provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary to allow for better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people expressed demand to get online, the costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers, and Virtual Dedicated Servers using the same types of functionality provided by the VM OS in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run your sites and applications. With virtualization, you can take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware you would need to meet your company's needs.

Virtualization

As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: One server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system would present all of the environment's resources as though those resources were in a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing" since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like telcos did in the 90s). In these cloud computing environments, it became easy to add resources to the "cloud": Just add another server to the rack and configure it to become part of the bigger system.

Clouds

As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to make the cloud's benefits available to users who don't happen to have an abundance of physical servers available to create their own cloud computing infrastructure. Those users could order "cloud computing instances" (also known as "cloud servers") by selecting the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (since it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow for users to grab pieces, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. Because we automate almost everything in our data centers, you're able to spin up load balancers and firewalls and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.

-James

May 15, 2013

Secure Quorum: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we’re happy to welcome Gerard Ibarra from Secure Quorum. Secure Quorum is an easy-to-use emergency notification system and crisis management system that resides in the cloud.

Are You Prepared for an Emergency?

Every company's management team faces the challenge of having too many things going on with not enough time in the day. It's difficult to get everything done, so when push comes to shove, particular projects and issues need to be prioritized to be completed. What do we have to do today that can't be put off to tomorrow? Often, businesses fall into a reactionary rut where they are constantly "putting out the fires" first, and while it's vital for a business to put out those fires (literal or metaphorical), that approach makes it difficult to proactively prepare for those kinds of issues and streamline the process of resolving them. Secure Quorum was created to provide a simple, secure medium to deal with emergencies and incidents.

What we noticed was that businesses didn't often consider planning for emergencies as part of their operations. The emergencies I'm talking about thankfully don't happen often, but fires, accidents, power outages, workplace violence and denial of service attacks can severely impact the bottom line if they aren't addressed quickly ... They can make or break you. Are you prepared?

Every second that we fail to make informed and logical decisions during an emergency is time lost in taking action. Take these facts for a little perspective:

  • "Property destruction and business disruption due to disasters now rival warfare in terms of loss." (University Corporation for Atmospheric Research)
  • More than 10,000 severe thunderstorms, 2,500 floods, 1,000 tornadoes and 10 hurricanes affect the United States each year. On average, 500 people die yearly because of severe weather and floods. (National Weather News 2005)
  • The cost of natural disasters is rising. During the past two decades, natural disaster damage costs have exceeded the $500 billion mark. Only 17 percent of that figure was covered by insurance. (Dennis S. Mileti, Disasters by Design)
  • Losses as a result of global disasters continue to increase on average every year, with an estimated $360 billion USD lost in 2011. (Centre for Research in the Epidemiology of Disasters)
  • Natural disasters, power outages, IT failures and human error are common causes of disruptions to internal and external communications. They "can cause downtime and have a significant negative impact on employee productivity, customer retention, and the confidence of vendors, partners, and customers." (Debra Chin, Palmer Research, May 2011)

These kinds of "emergencies" are not going away, but because specific emergencies are difficult (if not impossible) to predict, it's not obvious how to deal with them. How do we reduce risk for our employees, vendors, customers and our business? The two best answers to that question are to have a business continuity plan (BCP) and to have a way to communicate and collaborate in the midst of an emergency.

Start with a BCP. A BCP is a strategic plan to help identify and mitigate risk. Investopedia gives a great explanation:

The creation of a strategy through the recognition of threats and risks facing a company, with an eye to ensure that personnel and assets are protected and able to function in the event of a disaster. Business continuity planning (BCP) involves defining potential risks, determining how those risks will affect operations, implementing safeguards and procedures designed to mitigate those risks, testing those procedures to ensure that they work, and periodically reviewing the process to make sure that it is up to date.

Make sure you understand the basics of a BCP, and look for cues from organizations like FEMA for examples of how to approach emergency situations: http://www.ready.gov/business-continuity-planning-suite.

Once you have a basic BCP in place, it's important to be able to execute it when necessary ... That's where an emergency communication and collaboration solution comes into play. You need to streamline how you communicate when an emergency occurs, and if you're relying on a manual process like a phone tree to spread the word and contact key stakeholders in the midst of an incident, you're wasting time that could better be spent focusing on the issue at hand. An emergency communication solution automates that process quickly and logically.

When you create a BCP, you consider which people in your organization are key to responding to specific types of emergencies, and if anything ever happens, you want to get all of those people together. An emergency communication system will collect the relevant information, send it to the relevant people in your organization and seamlessly bridge them into a secured conference call. What would take minutes to complete now takes seconds, and when it comes to responding to these kinds of issues, seconds count. With everyone on a secure call, decisions can be made quickly and recorded to inform employees and stakeholders of what occurred and what the next steps are.

Plan for emergencies and hope that you never have to use that plan. Think about preparing for emergencies strategically, and it could make all the difference in the world. Secure Quorum is a platform that makes it easy to communicate and collaborate quickly, reliably and securely in those high-stress situations, so if you're interested in getting help when it comes to responding to emergencies and incidents, visit our site at SecureQuorum.com and check out the whitepaper we just published with one of our customers: Ease of Use: Make it Part of Your Software Decision.

-Gerard Ibarra, CEO of Secure Quorum

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

February 18, 2013

What Happen[ed] in Vegas - Parallels Summit 2013

The Las Vegas Convention and Visitors Authority says, "What happens in Vegas, stays in Vegas," but we absconded from Caesars Palace with far too many pictures and videos from Parallels Summit to adhere to their suggestion. Over the course of three days, attendees stayed busy with presentations, networking sessions, parties, cocktails and (of course) the Server Challenge II. And thanks to Alan's astute questions in The Hangover, we didn't have to ask if the hotel was pager-friendly, whether a payphone bank was available or if Caesar actually lived at the hotel ... We could focus on the business at hand.

This year, Parallels structured the conference around three distinct tracks — Business, Technical and Developer — to focus all of the presentations for their most relevant audiences, and as a result, Parallels Summit engaged a broader, more diverse crowd than ever before. Many of the presentations were specifically geared toward the future of the cloud and how businesses can innovate to leverage the cloud's potential. With all of that buzz around the cloud and innovation, SoftLayer felt right at home. We were also right at home when it came to partying.

SoftLayer was a proud sponsor of the massive Parallels Summit party at PURE Nightclub in Caesars Palace on the second night of the conference. With respect to the "What Happens in Vegas" tagline, we actually powered down our recording devices to let the crowd enjoy the jugglers, acrobats, drinks and music without fear of incriminating pictures winding up on Facebook. Don't worry, though ... We made up for that radio silence by getting a little extra coverage of the epic Server Challenge II competition.

More than one hundred attendees stepped up to reassemble our rack of Supermicro servers, and the competition was fierce. The top two times were fifty-nine hundredths of a second apart from each other, and it took a blazingly fast time of 1:25.00 to even make the leader board. As the challenge heated up, we were able to capture video of the top three competitors (to be used as study materials for all competitors at future events):

It's pretty amazing to see the cult following that the Server Challenge is starting to form, but it's not very surprising. Given how intense some of these contests have been, people are scouting our events page for their next opportunity to step up to the server rack, and I wouldn't be surprised to see that people are mocking up their own Server Challenge racks at home to hone their strategy. A few of our friends on Twitter hinted that they're in training to dominate the next time they compete, so we're preparing for the crowds to get bigger and for the times to keep dropping.

If you weren't able to attend the show, Parallels posted video from two of the keynote presentations, and shared several of the presentation slide decks on the Parallels Summit Agenda. You might not get the full experience of networking, partying or competing in the Server Challenge, but you can still learn a lot.

Viva Las Vegas! Viva Parallels! Viva SoftLayer!

-Kevin

January 28, 2013

Catalyst: In the Startup Sauna and Slush

Slush.fi was a victim of its own success. In November 2012, the website home of Startup Sauna's early-stage startup conference was crippled by an unexpected flood of site traffic, and they had to take immediate action. Should they get a private MySQL instance from their current host to try to accommodate the traffic, or should they move their site to the SoftLayer cloud? Spoiler: You're reading this post on the SoftLayer Blog.

Let me back up for a second and tell you a little about Startup Sauna and Slush. Startup Sauna hosts (among other things) a Helsinki-based seed accelerator program for early-stage startup companies from Northern Europe and Russia. They run two five-week programs every year, with more than one hundred graduated companies to date. In addition to the accelerator program, Startup Sauna also puts on Slush, the biggest annual startup conference in Northern Europe. Slush was founded in 2008 with the intent of bringing the local startup scene together at least once every year. Now, five years later, Slush brings more international investors and media to the region than any other event out there. This year alone, 3,500 entrepreneurs, investors and partners converged on Slush to make connections and see the region's most creative and innovative businesses, products and services.

Slush Conference

In October of last year, we met the founders of Startup Sauna, and it was clear that they would be a perfect fit to join Catalyst. We offer their portfolio companies free credits for cloud and dedicated hosting, and we really try to get to know the teams and alumni. Because Startup Sauna signed on just before Slush 2012 in November, they didn't want to rock the boat by moving their site to SoftLayer before the conference. Little did we know that they'd end up needing to make the transition during the conference.

When the event started, the Slush website was inundated with traffic. Attendees were checking the agenda and learning about some of the featured startups, and the live stream of the presentation brought record numbers of unique visitors and views. That's all great news ... Until those "record numbers" pushed the site's infrastructure to its limit. Startup Sauna CTO Lari Haataja described what happened:

The number of participants definitely had the most impact on our operations. The Slush website was hosted on a standard webhotel (not by SoftLayer), and due to the tremendous traffic, we faced some major problems. Everyone was busy during the first morning, and it took until noon before we had time to respond to the messages about our website not responding. Our Google Analytics were on fire, especially when Jolla took the stage to announce their big launch. We were streaming the whole program live, and anyone who wasn't able to attend the conference wanted to be the first to know what was happening.

The Slush website was hosted on a shared MySQL instance with a limited number of open connections, so when those connections were maxed out (quickly) by site visitors from 134 different countries, database errors abounded. The Startup Sauna team knew that a drastic change was needed to get the site back online and accessible, so they provisioned a SoftLayer cloud server and moved their site to its new home. In less than two hours (much of the time being spent waiting for files to be downloaded and for DNS changes to be recognized), the site was back online and able to accommodate the record volume of traffic.
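If you're wondering whether your own shared database is creeping toward the same ceiling, MySQL will tell you. Here's a small sketch; the connection details are placeholders, while the variable and status names are standard MySQL.

    # A quick check of the connection ceiling described above. Host and
    # credentials are placeholders; the SHOW statements are standard MySQL.
    import pymysql

    conn = pymysql.connect(host='db.example.com', user='monitor',
                           password='change-me', database='mysql')
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'max_connections'")
        print(cur.fetchone())   # the hard limit on simultaneous connections
        cur.execute("SHOW GLOBAL STATUS LIKE 'Threads_connected'")
        print(cur.fetchone())   # connections open right now
        cur.execute("SHOW GLOBAL STATUS LIKE 'Max_used_connections'")
        print(cur.fetchone())   # high-water mark since the server started
    conn.close()

When Max_used_connections sits at the max_connections limit during your busiest hour, you're one traffic spike away from the database errors Slush saw.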

You've seen a few of these cautionary tales before on the SoftLayer Blog, and that's because these kinds of experiences are all too common. You dream about getting hundreds of thousands of visitors, but when those visitors come, you have to be ready for them. If you have an awesome startup and you want to learn more about the Startup Sauna, swing by Helsinki this week. SoftLayer Chief Strategy Officer George Karidis will be in town, and we plan on taking the Sauna family (and anyone else interested) out for drinks on January 31! Drop me a line in a comment here or over on Twitter, and I'll make sure you get details.

-@EmilyBlitz

December 31, 2012

FatCloud: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Ian Miller, CEO of FatCloud. FatCloud is a cloud-enabled application platform that allows enterprises to build, deploy and manage next-generation .NET applications.

'The Cloud' and Agility

As the CEO of a cloud-enabled application platform for the .NET community, I get the same basic question all the time: "What is the cloud?" I'm a consumer of cloud services and a supplier of software that helps customers take advantage of the cloud, so my answer to that question has evolved over the years, and I've come to realize that the cloud is fundamentally about agility. The growth, evolution and adoption of cloud technology have been fueled by businesses that don't want to worry about infrastructure and need to pivot or scale quickly as their needs change.

Because FatCloud is a consumer of cloud infrastructure from SoftLayer, we are much more nimble than we'd be if we had to worry about building data centers, provisioning hardware, patching software and doing all the other time-consuming tasks that are involved in managing a server farm. My team can focus on building innovative software with confidence that the infrastructure will be ready for us on demand when we need it. That peace of mind also happens to be one of the biggest reasons developers turn to FatCloud ... They don't want to worry about configuring the fundamental components of the platform under their applications.

Fat Cloud

Our customers trust FatCloud's software platform to help them build and scale their .NET applications more efficiently. To do this, we provide a Core Foundation of .NET WCF services that effectively provides the "plumbing" for .NET cloud computing, and we offer premium features like a distributed NoSQL database, work queue, file storage/management system, content caching and an easy-to-use administration tool that simplifies managing the cloud for our customers. FatCloud makes developing for hundreds of servers as easy as developing for one, and to prove it, we offer a free 3-node developer edition so that potential customers can see for themselves.

FatCloud Offering

The agility of the cloud has the clearest value for a company like ours. In one heavy-duty testing month, we needed 75 additional servers online, and after that testing was over, we needed the elasticity to scale that infrastructure back down. We're able to adjust our server footprint as we balance our computing needs and work within budget constraints. Ten years ago, that would have been overwhelmingly expensive (if not impossible). Today, we're able to do it economically and in real-time. SoftLayer is helping keep FatCloud agile, and FatCloud passes that agility on to our customers.

Companies developing custom software for the cloud, mobile or web using .NET want a reliable foundation to build from, and they want to be able to bring their applications to market faster. With FatCloud, those developers can complete their projects in about half the time it would take them if they were to develop conventionally, and that speed can be a huge competitive differentiator.

The expensive "scale up" approach of buying and upgrading powerful machines for something like SQL Server is out-of-date now. The new kid in town is the "scale out" approach of using low-cost servers to expand infrastructure horizontally. You'll never run into those "scale up" hardware limitations, and you can build a dynamic, scalable and elastic application much more economically. You can be agile.

If you have questions about how FatCloud and SoftLayer make cloud-enabled .NET development easier, send us an email: sales@fatcloud.com. Our team is always happy to share the easy (and free) steps you can take to start taking advantage of the agility the cloud provides.

-Ian Miller, CEO of FatCloud

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New partners will be added to the Marketplace each month, so stay tuned for many more to come.

December 4, 2012

Big Data at SoftLayer: MongoDB

In one day, Facebook's databases ingest more than 500 terabytes of data, Twitter processes 500 million Tweets and Tumblr users publish more than 75 million posts. With such an unprecedented volume of information, developers face significant challenges when it comes to building an application's architecture and choosing its infrastructure. As a result, demand has exploded for "big data" solutions — resources that make it possible to process, store, analyze, search and deliver data from large, complex data sets. In light of that demand, SoftLayer has been working in strategic partnership with 10gen — the creators of MongoDB — to develop a high-performance, on-demand, big data solution. Today, we're excited to announce the launch of specialized MongoDB servers at SoftLayer.

If you've configured an infrastructure to accommodate big data, you know how much of a pain it can be: You choose your hardware, you configure it to run NoSQL, you install an open source NoSQL project that you think will meet your needs, and you keep tweaking your environment to optimize its performance. Assuming you have the resources (and patience) to get everything running efficiently, you'll wind up with the horizontally scalable database infrastructure you need to handle the volume of content you and your users create and consume. SoftLayer and 10gen are making that process a whole lot easier.

Our new MongoDB solutions take the time and guesswork out of configuring a big data environment. We give you an easy-to-use system for designing and ordering everything you need. You can start with a single server or roll out multiple servers in a single replica set across multiple data centers, and in under two hours, an optimized MongoDB environment is provisioned and ready to be used. I stress that it's an "optimized" environment because that's been our key focus. We collaborated with 10gen engineers on hardware and software configurations that provide the most robust performance for MongoDB, and we incorporated many of their MongoDB best practices. The resulting "engineered servers" are big data powerhouses:

MongoDB Configs

From each engineered server base configuration, you can customize your MongoDB server to meet your application's needs, and as you choose your upgrades from the base configuration, you'll see the thresholds at which you should consider upgrading other components. As your data set's size and the number of indexes in your database increase, you'll need additional RAM, CPU, and storage resources, but you won't need them in the same proportions — certain components become bottlenecks before others. Sure, you could upgrade all of the components in a given database server at the same rate, but if, say, you update everything when you only need to upgrade RAM, you'd be adding (and paying for) unnecessary CPU and storage capacity.
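As a rough illustration of that kind of sizing check, the sketch below compares index size plus an assumed "hot" portion of the data against available RAM. The 20 percent working-set figure, the RAM size and the hostname are assumptions for illustration, not 10gen guidance.

    # An illustrative RAM check: MongoDB performs best when the indexes plus the
    # frequently accessed portion of the data (the working set) fit in memory.
    # The 20% working-set estimate and the 64 GB figure are assumptions.
    from pymongo import MongoClient

    client = MongoClient('mongodb://mongo-dal01.example.com')
    stats = client.appdata.command('dbstats')

    index_bytes = stats['indexSize']
    working_set_bytes = stats['dataSize'] * 0.20   # assume ~20% of the data is "hot"
    ram_bytes = 64 * 1024**3                       # e.g., a 64 GB engineered server

    if index_bytes + working_set_bytes > ram_bytes * 0.8:
        print('RAM is the next bottleneck; upgrade it before CPU or storage.')
    else:
        print('RAM headroom looks fine for now.')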

Using our new Solution Designer, it's very easy to graphically design a complex multi-site replica set. Once you finalize your locations and server configurations, you'll click "Order," and our automated provisioning system will kick into high gear. It deploys your server hardware, installs CentOS (with OS optimizations to provide MongoDB performance enhancements), installs MongoDB, installs MMS (MongoDB Monitoring Service) and configures the network connection on each server to cluster it with the other servers in your environment. A process that may have taken days of work and months of tweaking is completed in less than four hours. And because everything is standardized and automated, you run much less risk of human error.

MongoDB Configs
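Once a replica set like that is provisioned, an application connects simply by listing its members. Here's a minimal pymongo sketch; the hostnames and replica set name are hypothetical.

    # A minimal sketch of connecting to a multi-site replica set. Hostnames and
    # the replica set name are placeholders for whatever a real deployment uses.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient(
        'mongodb://mongo-dal01.example.com,mongo-sjc01.example.com,'
        'mongo-wdc01.example.com/?replicaSet=rs0')

    # Writes always go to the primary; reads can be spread to nearby secondaries
    # (which may lag slightly behind the primary).
    db = client.get_database('appdata',
                             read_preference=ReadPreference.SECONDARY_PREFERRED)
    db.events.insert_one({'type': 'signup', 'source': 'web'})
    print(db.events.count_documents({'type': 'signup'}))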

One of the other massive benefits of working so closely with 10gen is that we've been able to integrate 10gen's MongoDB Cloud Subscriptions into our offering. Customers who opt for a MongoDB Cloud Subscription get additional MongoDB features (like SSL and SNMP support) and support direct from the MongoDB authority. As an added bonus, since the 10gen team has an intimate understanding of the SoftLayer environment, they'll be able to provide even better support to SoftLayer customers!

You shouldn't have to sacrifice agility for performance, and you shouldn't have to sacrifice performance for agility. Most of the "big data" offerings in the market today are built on virtual servers that can be provisioned quickly but offer meager performance levels relative to running the same database on bare metal infrastructure. To get the performance benefits of dedicated hardware, many users have chosen to build, roll out and tweak their own configurations. With our MongoDB offering, you get the on-demand availability and flexibility of a cloud infrastructure with the raw power and full control of dedicated hardware.

If you've been toying with the idea of rolling out your own big data infrastructure, life just got a lot better for you.

-Duke

October 8, 2012

Don't Let Your Success Bring You Down

Last week, I got an email from a huge technology conference about their new website, exciting new speaker lineup and the availability of early-bird tickets. I clicked on a link from that email and found that their fancy new website was down. After giving up on getting my early-bird discount, I surfed over to Facebook, and I noticed a post from one of my favorite blogs, Dutch Cowboys, about another company's interesting new product release. I clicked the link to check out the product, and THAT site was down, too. It's painfully common for some of the world's most popular sites and applications to buckle under the strain of their own success ... Just think back to when Diablo III was launched: Demand crushed their servers on release day, and the gamers who waited patiently to get online with their copy turned to the world of social media to express their visceral anger about not being able to play the game.

The question everyone asks is why this kind of thing still happens. To a certain extent, the reality is that most entrepreneurs don't know what they don't know. I spoke with a woman who was going to be featured on BBC's Dragons' Den, and she said that the traffic from the show's viewers crippled most (if not all) of the businesses that were presented on the program. She needed to safeguard against that happening to her site, and she didn't know how to do that.

Fortunately, it's pretty easy to keep sites and applications online with on-demand infrastructure and auto-scaling tools. Unfortunately, most business owners don't know how easy it is, so they don't take advantage of the resources available to them. Preparing a website, game or application for its own success doesn't have to be expensive or time consuming. With pay-for-what-you-use pricing and "off the shelf" cloud management solutions, traffic-caused outages do NOT have to happen.

First impressions are extremely valuable, and if I wasn't really interested in that conference or the new product Dutch Cowboys blogged about, I'd probably never go back to those sites. Most Internet visitors would not. I cringe to think about the potential customers lost.

Businesses spend a lot of time and energy on user experience and design, but they don't think to devote the same level of energy to their infrastructure. In the '90s, sites crashing or slowing down was somewhat acceptable since the interwebs were exploding beyond the available infrastructure's capabilities. Now, there's no excuse.

If you're launching a new site, product or application, how do you get started?

The first thing you need to do is understand what resources you need and where the potential bottlenecks are when hundreds, thousands or even millions of people want to use what you're launching. You don't need to invest in infrastructure to accommodate all of that traffic, but you need to know how you can add that infrastructure when you need it.

One of the easiest ways to prepare for your own success without getting bogged down by the bits and bytes is to take advantage of resources from some of our technology partners (and friends). If you have a PHP, Ruby on Rails or Node.js application, Engine Yard will help you deploy and manage a specialized hosting environment. When you need a little more flexibility, RightScale's cloud management product lets you easily manage your environment in "a single integrated solution for extreme efficiency, speed and control." If your biggest concern is your database's performance and scalability, Cloudant has an excellent cloud database management service.

Invest a little time in getting ready for your success, and you won't need to play catch-up when that success comes to you. Given how easy it is to prepare and protect your hosting environment these days, outages should go the way of the 8-track player.

-@jpwisler
