Author Archive: Guest Contributor

May 24, 2016

Streamlining the VMware licenses ordering process

IBM and VMware’s agreement (announced in February) enables enterprise customers to extend their existing on-premises workloads to the cloud—specifically, the IBM Cloud. Customers can now leverage VMware technologies with IBM’s worldwide cloud data centers, giving them the power to scale globally without incurring CAPEX while reducing security risks.

So what does this mean for customers’ VMware administrators? They can quickly realize the benefits of a cost-effective hybrid cloud by deploying into SoftLayer’s enterprise-grade global cloud platform (VMware@SoftLayer). One of these benefits is that vSphere workloads and catalogs can be provisioned into VMware vSphere environments within SoftLayer's data centers without modification to VMware VMs or guests. The use of a common vSphere hypervisor and management/orchestration platform makes these deployments possible.

vSphere implementations on SoftLayer also enable utilization of other components. Table 1 contains a list of VMware products that are now available for ordering through the SoftLayer customer portal. Note that prices are subject to change. Visit VMware Solutions for the most current pricing.

Product Name | Customer List Price
VMware vCenter Server Standard | Included with vSphere
VMware vSphere Enterprise Plus | Starting at $85 per processor per month
VMware vRealize Suite [Includes VMware vRealize (Standard Edition), vRealize Log Insight, and vRealize Automation (Standard Edition)] | Starting at $48 per processor per month
VMware vRealize Operations Enterprise Edition | Starting at $68 per processor per month
VMware vRealize Operations Advanced Edition | Starting at $33 per processor per month
VMware vRealize Automation Enterprise | Starting at $150 per processor per month
VMware vRealize Automation Advanced | Starting at $75 per processor per month
VMware NSX-V | Starting at $118 per processor per month
VMware Integrated OpenStack (VIO) | Starting at $11 per processor per month
Virtual SAN Standard Tier 1 (0-20 TB) | Contact SoftLayer Sales for pricing
Virtual SAN Standard Tier 2 (21-64 TB) | Contact SoftLayer Sales for pricing
Virtual SAN Standard Tier 3 (65-124 TB) | Contact SoftLayer Sales for pricing
VMware Site Recovery Manager (SRM) | Starting at $257 per processor per month

Table 1. VMware products available in the SoftLayer Customer Portal

Use the following steps to order licenses for the VMware products listed in Table 1:

  1. Log in to the SoftLayer customer portal.
  2. Click Devices > Managed > VMware Licenses.

Steps to VMware Licenses page

Figure 1. Steps to VMware Licenses page

  3. Click Order VMware Licenses in the top right-hand corner of the VMware Licenses page.

Order VMware Licenses

Figure 2. Order VMware Licenses

  4. Click Add License, then use the drop-down lists to select the VMware products and the number of CPUs for the licenses you want to order (Figure 3).

Note: VMware vSphere Enterprise Plus (ESXi 6.0) cannot be ordered through this process. You must still order it as a requested OS when you order your bare metal server.



Select the VMware product and number of CPUs

Figure 3. Select the VMware product and number of CPUs

  5. View the price of the VMware product you selected on the far right of the screen.

View your selection before continuing the ordering process

Figure 4. View your selection before continuing the ordering process

  6. Click Continue to order the licenses, or click Add License to add more licenses.

Once you click Continue, you are taken back to the VMware Licenses page, which displays your VMware product(s) and license key(s).

List of VMware products and license keys

Figure 5. List of VMware products and license keys

  7. Download the Install Files from the link on this page. You need an SSL connection to the SoftLayer private network to access the download page.
  8. Download the correct VMware product(s) and manually install them into your vSphere environment.

 

- Kerry Staples

May 19, 2016

Bringing the power of GPUs to cloud

The GPU was invented by NVIDIA back in 1999 as a way to quickly render computer graphics by offloading the computational burden from the CPU. A great deal has happened since then—GPUs are now enablers for leading edge deep learning, scientific research, design, and “fast data” querying startups that have ambitions of changing the world.

That’s because GPUs are very efficient at manipulating computer graphics, image processing, and other computationally intensive high performance computing (HPC) applications. Their highly parallel structure makes them more effective than general purpose CPUs for algorithms where the processing of large blocks of data is done in parallel. GPUs, capable of handling multiple calculations at the same time, also have a major performance advantage. This is the reason SoftLayer (now part of IBM Cloud) has brought these capabilities to a broader audience.

We support the NVIDIA Tesla Accelerated Computing Platform, which makes HPC capabilities more accessible to, and affordable for, everyone. Companies like Artomatix and MapD are using our NVIDIA GPU offerings to achieve unprecedented speed and performance, traditionally only achievable by building or renting an HPC lab.

By provisioning SoftLayer bare metal servers with cutting-edge NVIDIA GPU accelerators, any business can harness the processing power needed for HPC. This enables businesses to manage the most complex, compute-intensive workloads—from deep learning and big data analytics to video effects—using affordable, on-demand computing infrastructure.

Take a look at some of the groundbreaking results companies like MapD are experiencing using GPU-enabled technology running on IBM Cloud. They’re making big data exploration visually interactive and insightful by using NVIDIA Tesla K80 GPU accelerators running on SoftLayer bare metal servers.

SoftLayer has also added the NVIDIA Tesla M60 GPU to our arsenal. This GPU technology enables clients to deploy fewer, more powerful servers on our cloud while churning through more jobs. Specifically, server simulation runs are cut down from weeks or days to hours when compared to a CPU-only server—think of performance running tools and applications like Amber for molecular dynamics, Terachem for quantum chemistry, and Echelon for oil and gas.

The Tesla M60 also speeds up virtualized desktop applications. There is widespread support for running virtualized applications, from AutoCAD to Siemens NX, from a GPU server. This allows clients to centralize their infrastructure while providing access to the application, regardless of location. There are endless use cases with GPUs.

With this arsenal, we are one step closer to offering real supercomputing performance on a pay-as-you-go basis, which makes this new approach to tackling big data problems accessible to customers of all sizes. We are at an interesting inflection point in our industry, where GPU technology is opening the door for the next wave of breakthroughs across multiple industries.

-Jerry Gutierrez

May 17, 2016

New routes configured for SoftLayer customers

Customers will see a new route configured on a newly provisioned customer host or on a customer host after a portal-initiated OS reload. This is part of a greater goal to enable new services and offerings for SoftLayer customers. This route will direct traffic addressed to hosts configured out of the 161.26.0.0/16 network block (161.26.0.0 -161.26.255.255) to the back end private gateway IP address configured on customer servers or virtual server instances.

The 161.26.0.0/16 address space is assigned to SoftLayer by IANA and will not be advertised over the front end public network. This space will be used exclusively on SoftLayer’s back end private network, will never conflict with network addresses on the Internet, and should never conflict with address space used by third-party VPN service providers.

This new route is similar to the 10.0.0.0/8 route already located on SoftLayer hosts, in that SoftLayer services are addressed out of both ranges. Also, both the 10.0.0.0/8 route and the 161.26.0.0/16 route will need to be configured on a customer host if it is required to access all SoftLayer services hosted on the back end private network. Unlike the 10.0.0.0/8 range, the 161.26.0.0/16 range will be used exclusively for SoftLayer services. Customers will need to ensure that ACL/firewalls on customer servers, virtual server instances, and gateway appliances are configured to allow connectivity to the 161.26.0.0/16 network block to access these new services.
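On a Linux host, the route itself can be added with something like `ip route add 161.26.0.0/16 via <private-gateway-ip>`, where the gateway placeholder is your server's back end private gateway address. As a quick sanity check, a short Python sketch can tell you whether a given address falls inside the ranges that must use the private network:

```python
import ipaddress

# The two ranges used for SoftLayer services on the back end private network.
SERVICES_NET = ipaddress.ip_network("161.26.0.0/16")
PRIVATE_NET = ipaddress.ip_network("10.0.0.0/8")

def needs_private_route(host):
    """Return True if traffic to `host` should leave via the back end
    private gateway rather than the public interface."""
    addr = ipaddress.ip_address(host)
    return addr in SERVICES_NET or addr in PRIVATE_NET

print(needs_private_route("161.26.0.10"))  # True: new services block
print(needs_private_route("8.8.8.8"))      # False: public Internet address
```

The same check is useful when auditing ACL or firewall rules for the connectivity requirement described above.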

For more information on this new route, including how to configure existing systems to use it, read more on KnowledgeLayer.

-Curtis

May 5, 2016

Everything you need to know about IBM POWER8 on SoftLayer

SoftLayer provides industry-leading cloud Infrastructure as a Service from a growing number of data centers around the world. To enable clients to draw critical insights and make better decisions faster, now there’s even more good news—customers and partners can use and rely on the secure, flexible, and open platform of IBM POWER Systems, which have just become available in SoftLayer’s DAL09 data center.

POWER8 servers are built with a processor designed and optimized specifically for big data workloads combining compute power, cutting-edge memory bandwidth, and I/O in ways that result in increased levels of performance, resiliency, availability, and security.

IBM POWER systems were designed to run many of the most demanding enterprise applications, industry-specific solutions, relational database management systems, and high performance computing environments. POWER8 servers are an ideal system for Linux and support a vast ecosystem of open source, ISV, and IBM software products, giving clients a single, industry-leading open architecture (IBM POWER) in which to store, retrieve, and derive value from the “gold mine” of next generation applications.

The new POWER8 servers available from SoftLayer offer an optimal hybrid cloud infrastructure to test new Linux workloads in a secure and isolated cloud environment with reduced risk. As clients explore newer use cases like advanced analytics, machine learning, and cognitive computing against the combination of vast amounts of both structured and unstructured data, POWER8 and SoftLayer are in a unique position to accelerate client value. This new offering will also continue to leverage the rapidly expanding community of developers contributing to the OpenPOWER ecosystem as well as thousands of independent software vendors that support Linux on Power applications.

The explosive growth of both structured and unstructured data requires businesses to derive insights and change faster than ever to keep pace. The cloud enables you to do just that. Our new and unique solution pairs SoftLayer’s Network-Within-a-Network topology for true out-of-band access, an easy-to-use customer portal, and robust APIs for full remote access of all product and service management options with the unique high performance technology of IBM POWER8 to help accelerate the creation and delivery of the next generation of IT solutions.

For more details, visit our POWER8 servers page.

 

-Chuck Calio,  IBM Power Systems Growth Solution Specialist

April 4, 2016

A deeper dive into using VMware on SoftLayer

 

IBM and VMware recently announced an expanded global strategic partnership that enables customers to operate a seamless and consistent cloud, spanning hybrid environments. VMware customers now have the ability to quickly provision new (or scale existing) VMware workloads to IBM Cloud. This helps companies retain the value of their existing VMware-based solutions while leveraging the growing footprint of IBM Cloud data centers worldwide.

IBM customers are now able to purchase VMware software in a flexible, cost-efficient manner to power their deployments on IBM’s bare metal hardware infrastructure service. They’ll also be able to take advantage of their existing skill sets, tools, and technologies versus having to purchase and learn new ones. New customers will have complete control of their VMware environment, allowing them to expand into new markets and reduce startup cost by leveraging SoftLayer’s worldwide network and data centers.

This new offering also allows customers access to the full stack of VMware products to build an end-to-end VMware solution that matches their current on-premises environment or create a new one. Leveraging NSX lets customers manage their SoftLayer network infrastructure and extend their on-premises environment into SoftLayer as well, letting them expand their current capacity while reducing startup capital.

Customers can currently purchase vSphere Enterprise Plus 6.0 from SoftLayer. The VMware software components in Table 1 will be available for à la carte purchase on individual SoftLayer bare metal servers by Q2 2016. All products listed will be billed on a per-socket basis.

Table 1: VMware software components

Product Name | Version | Charge per
VMware vRealize Operations Enterprise Edition | 6.0 | CPU
VMware vRealize Operations Advanced Edition | 6.0 | CPU
VMware vRealize Operations Standard Edition | 6.0 | CPU
VMware vRealize Log Insight | 3.0 | CPU
VMware NSX-V | 6.2 | CPU
VMware Integrated OpenStack (VIO) | 2.0 | CPU
Virtual SAN Standard Tier I (0-20TB) | 6.X | CPU
Virtual SAN Standard Tier II (21-64TB) | 6.X | CPU
Virtual SAN Standard Tier III (65-124TB) | 6.X | CPU
VMware Site Recovery Manager | 6.1 | CPU
VMware vRealize Automation Enterprise | 6.X | CPU
VMware vRealize Automation Advanced | 6.X | CPU

The following FAQs will help you better understand the IBM and VMware partnership:                                                                                                                                             

Q: What are you offering today? And how much does it cost?

A: Today, IBM offers vSphere Enterprise Plus 6.0, which includes vCenter and vCloud Connector. It’s currently available for $85 per CPU for single CPU, dual CPU, and quad CPU servers. The products listed in Table 1 will be available in Q2 2016.

Q: Is per-CPU pricing a change from how VMware software was offered before?

A: Yes, the CPU-based pricing is new, and is unique to IBM Cloud. IBM is currently the only cloud provider to offer this type of pricing for VMware software. CPU-based pricing allows customers to more accurately budget how much they spend for VMware software in the cloud.
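To illustrate how per-CPU pricing translates into a predictable budget, here is a minimal sketch using list prices quoted elsewhere in this post (prices are subject to change, and the product mix chosen here is arbitrary):

```python
# Per-CPU monthly list prices as quoted at the time of writing;
# actual prices are subject to change.
PRICE_PER_CPU = {
    "vSphere Enterprise Plus": 85,
    "NSX-V": 118,
    "vRealize Automation Advanced": 75,
}

def monthly_cost(products, cpus_per_server, servers):
    """Estimate the monthly VMware license bill for a fleet of
    identically sized bare metal servers."""
    per_server = sum(PRICE_PER_CPU[p] for p in products) * cpus_per_server
    return per_server * servers

# Ten dual-CPU servers running vSphere Enterprise Plus and NSX:
print(monthly_cost(["vSphere Enterprise Plus", "NSX-V"], 2, 10))  # 4060
```

Because licenses are billed per socket, scaling the estimate up or down is a straight multiplication rather than a negotiation.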

Q: Can customers bring the licenses they already own and have acquired via an existing VMware license agreement (e.g., ELA)?

A: Customers can take advantage of the new pricing when purchasing the VMware software through the SoftLayer portal. Please contact your VMware sales representative to get approval if you plan on bringing the license you already own to IBM Cloud.

Q: Will you offer migration services?

A: Yes, migration services will be among the portfolio of managed services offerings we will make available. Details will be announced closer to the time of availability, which is later in 2016.

Q: What storage options are available for VMware environments on SoftLayer?

A: Customers can select from a diverse range of SoftLayer storage offerings and custom solutions depending on their requirements and preferences. Use the Select a Storage Option to use with VMware guide to determine the best storage option for your environment.

Q: Where can I find technical resources to learn more about VMware on SoftLayer?

A: There is extensive technical documentation available on KnowledgeLayer.

 

-Kerry Staples and Andreas Groth

March 24, 2016

future.ready(): 7 Things to Check Off Your Big Data Development List

Frank Ketelaars, Big Data Technical Leader for Europe at IBM, offers a checklist that every developer should have pinned to their board when starting a big data project.

Editor’s Note: Does your brain switch off when you hear industry-speak words like “innovation,” “transformation,” “leading edge,” “disruptive,” and “paradigm shift”? Go on, admit it. Ours do, too. That’s why we’re launching the future.ready() series—consisting of blogs, podcasts, webinars, and Twitter chats—with content created by developers, for developers. Nothing fluffy, nothing buzzy. With the future.ready() series, we aim to equip you with tools and knowledge that you can use, not just talk and tweet about.

For the first edition, I’ve invited Frank Ketelaars, an expert in high volume data space, to walk us through seven things to check off when starting a big data development project.

-Michalina Kiera, SoftLayer EMEA senior marketing manager

 

This year, big data moves from a water cooler discussion to the to-do list. Gartner estimates that more than 75 percent of companies are investing or planning to invest in big data in the next two years.

I have worked on multiple high volume projects in industries that include banking, telecommunications, manufacturing, life sciences, and government, and in roles including architect, big data developer, and streaming analytics specialist. Based on my experience, here’s a checklist I put together that should give developers a good start. Did I miss anything? Join me on the Twitter chat or webinar to share your experience, ask questions, and discuss further. (See details below.)     

1. Team up with a person who has a budget and a problem you can solve.

For a successful big data project, you need to solve a business problem that’s keeping somebody awake at night. If there isn’t a business problem and a business owner—ideally one with a budget— your project won’t get implemented. Experimentation is important when learning any new technology. But before you invest a lot of time in your big data platform, find your sponsor. To do so, you’ll need to talk to everyone, including IT, business users, and management. Remember that the technical advantages of analytics at scale might not immediately translate into business value.

2. Get your systems ready to collect the data.

With additional data sources, such as devices, vehicles, and sensors connected to networks and generating data, the variety of information and transportation mechanisms has grown dramatically, posing new challenges for the collection and interpretation of data.

Big data often comes from sources outside the business. External data comes at you in a variety of formats (including XML, JSON, and binary) and via a variety of different APIs. In 2016, you might think that everyone is on REST and JSON, but think again: SOAP still exists! The variety of the data is the primary technical driver behind big data investments, according to a survey of 402 business and IT professionals by management consultancy NewVantage Partners. From one day to the next, an API might change or a source might become unavailable.

Maybe one day we’ll see more standardization, but it won’t happen any time soon. For now, developers must plan to spend time checking for changes in APIs and data formats, and be ready to respond quickly to avoid service interruptions. And to expect the unexpected.
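One inexpensive defense is to validate each incoming record against the fields you actually depend on, so a silent API change becomes a loud failure instead of corrupt data downstream. A minimal sketch (the field names here are hypothetical):

```python
import json

# Fields this pipeline depends on; adjust to your own feed's contract.
EXPECTED_FIELDS = {"id", "timestamp", "payload"}

def validate_record(raw):
    """Parse a JSON record and fail fast if the source schema changed."""
    record = json.loads(raw)
    missing = EXPECTED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"source schema changed, missing: {sorted(missing)}")
    return record

ok = validate_record('{"id": 1, "timestamp": "2016-03-24", "payload": {}}')
try:
    validate_record('{"id": 2, "data": {}}')  # upstream renamed two fields
except ValueError as e:
    print(e)
```

A check like this, run at the ingestion boundary, turns an unannounced upstream change into an alert you can respond to the same day.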

3. Make sure you have the right to use that data.

Governance is a business challenge, but it’s going to touch developers more than ever before—from the very start of the project. Much of the data they will be handling is unstructured, such as text records from a call center. That makes it hard to work out what’s confidential, what needs to be masked, and what can be shared freely with external developers. Data will need to be structured before it can be analyzed, but part of that process includes working out where the sensitive data is, and putting measures in place to ensure it is adequately protected throughout its lifecycle.

Developers need to work closely with the business to ensure that they can keep data safe, and provide end users with a guarantee that the right data is being analyzed and that its provenance can be trusted. Part of that process will be about finding somebody who will take ownership of the data and attest to its quality.

4. Pick the right tools and languages.

With no real standards in place yet, there are many different languages and tools used to collect, store, transport, and analyze big data. Languages include R, Python, Julia, Scala, and Go (plus the Java and C++ you might need to work with your existing systems). Technologies include Apache Hadoop, Pig, and Spark, which provide massively parallel processing on top of a distributed file system (Spark can also run without Hadoop). There’s a list of 10 popular big data tools here, another 12 here, and a round-up of 45 big data tools here. 451 Research has created a map that classifies data platforms according to database type, implementation model, and technology. It’s a great resource, but its 18-color key shows how complex the landscape has become.

Not all of these tools and technologies will be right for you, but they hint at one way the developer’s core competency must change. Big data will require developers to be polyglots, conversant in perhaps five languages, who specialize in learning new tools and languages fast—not deep experts in one or two languages.

Nota bene: MapReduce and Pig are among the highest-paid technology skills in the US, and other big data skills are likely to be highly sought after as demand for them grows. Scala is a relatively new functional programming language for data preparation and analysis, and I predict it will be in high demand in the near future.

5. Forget “off-the-shelf.” Experiment and set up a big data solution that fits your needs. 

You can think of big data analytics tools like Hadoop as a car. You want to go to the showroom, pay, get in, and drive away. Instead, you’re given the wheels, doors, windows, chassis, engine, steering wheel, and a big bag of nuts and bolts. It’s your job to assemble it.

As InfoWorld notes, DevOps tools can help to create manageable Hadoop solutions. But you’re still faced with a lot of pieces to combine, diverse workloads, and scheduling challenges.

When experimenting with concepts and technologies to solve a certain business problem, also think about successful deployment in the organization. The project does not stop after the proof.

6. Secure resources for changes and updates.

Apache Hadoop and Apache Spark are still evolving rapidly and it is inevitable that the behavior of components will change over time and some may get deprecated shortly after initial release. Implementing new releases will be painful, and developers will need to have an overview of the big data infrastructure to ensure that as components change, their big data projects continue to perform as expected.

The developer team must plan time for updates and deprecated features, and a coordinated approach will be essential for keeping on top of the change.

7. Use infrastructure that’s ready for CPU and I/O intensive workloads.

My preferred definition of big data (and there are many – Forbes found 12) is this: "Big data is when you can no longer afford to bring the data to the processing, and you have to do the processing where the data is."

In traditional database and analytics applications, you get the data, load it onto your reporting server, process it, and post the results to the database.

With big data, you have terabytes of data, which might reside in different places—and which might not even be yours to move. Getting it to the processor is impractical. Big data technologies like Hadoop are based on the concept of data locality—doing the processing where the data resides.
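The model can be illustrated with a toy word count: each map runs against its local shard of the data, and only the small per-shard summaries travel over the network to be merged. A sketch of the idea in plain Python (real frameworks distribute the map phase across nodes; here the shards are just strings):

```python
from collections import Counter
from functools import reduce

# Two "shards" standing in for data blocks living on different nodes.
shards = [
    "big data is when you can no longer move the data",
    "move the processing to the data instead",
]

def map_phase(shard):
    """Runs where the shard lives; emits a compact per-shard summary."""
    return Counter(shard.split())

def reduce_phase(a, b):
    """Merges summaries; only these small Counters cross the network."""
    return a + b

totals = reduce(reduce_phase, (map_phase(s) for s in shards))
print(totals["data"])  # 3
```

The bandwidth saved is the point: the shards never move, only the counters do, which is exactly the data-locality principle Hadoop is built around.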

You can run Hadoop in a virtualized environment. Virtual servers don’t have local data, though, so the time taken to transport data between the SAN or other storage device and the server hurts the application’s performance. Noisy neighbors, unpredictable server speeds, and contested network connections can have a significant impact on performance in a virtualized environment. As a result, it’s difficult to offer service level agreements (SLAs) to end users, which makes it hard for them to depend on your big data implementations.

The answer is to use bare metal servers on demand, which enable you to predict and guarantee the level of performance your application can achieve, so you can offer an SLA with confidence. Clusters can be set up quickly, so you can accelerate your project really fast. Because performance is predictable and consistent, it’s possible to offer SLAs to business owners that will encourage them to invest in the big data project and rely on it for making business decisions.

How can I learn more?

Join me in the Twitter chat and webinar (details below) to discuss how you’re addressing big data or have your questions answered by me and my guests.  

Add our Twitter chat to your calendar. It happens Thursday, March 31 at 1 p.m. CET. Use the hashtag #SLdevchat to share your views or post your questions to me.

Register for the webinar on Wednesday, April 20, from 5 p.m. to 6 p.m. CET.

 

About the author

Frank Ketelaars has been Big Data Technical Leader in Europe for IBM since August 2013. As an architect, big data developer, and streaming analytics specialist, he has worked on multiple high volume projects in banking, telecommunications, manufacturing, life sciences and government. He is a specialist in Hadoop and real-time analytical processing.


 

January 6, 2016

Do You Speak SoftLayer Object Storage?

So you’ve made the decision to utilize object storage at SoftLayer. Great! But are you and your applications fluent in object storage? Do you know how to transfer data to SoftLayer object storage as well as modify and delete objects? How about when to use APIs and when to use storage gateways? If not, you’re not alone.

We’ve found that most IT professionals understand the difference between “traditional” (i.e., file and block) storage and object storage. They have difficulty, however, navigating the methods to interact with SoftLayer’s object storage service, which is based on OpenStack Swift. This is understandable, because traditional storage systems expose volumes and/or shares that can be mounted and consumed via iSCSI, NFS, or SMB protocols.

That’s not the case with object storage, including the object storage service offered by SoftLayer. Data is accessed only via REST APIs and language bindings, third-party applications supporting SFTP, the SoftLayer customer portal, or storage gateways.

The solutions are outlined below, including guidance on when to utilize each access method. Figure 1 provides a high level overview of the available options and their purpose.



Figure 1: Object storage data access methods

REST APIs and Language Bindings
The first and possibly most flexible method to access SoftLayer object storage is via REST APIs and language bindings. These APIs and bindings let you interact with SoftLayer object storage from the command line or programmatically. As a result, you can create scripts to upload files, download certain objects, and modify object metadata. Additionally, the current support for PHP, Java, Ruby, and Python bindings gives application developers the flexibility to support SoftLayer object storage in their applications.

While this method is flexible in terms of capabilities, it does assume the user has knowledge and experience writing scripts, programs, and applications. REST APIs and language bindings aren’t the best methods for IT organizations that want to integrate existing environment backup, archive, and disaster recovery solutions. These solutions typically require traditional storage mount points, which REST APIs and language bindings don’t provide.
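To give a feel for the API surface: requests follow the OpenStack Swift v1 URL and header conventions that the SoftLayer service is built on. The sketch below only builds the request pieces; the endpoint, account, and token values are placeholders, and the actual HTTP call is left to your client of choice:

```python
def object_url(endpoint, account, container, name):
    """Swift v1 object URLs follow /v1/<account>/<container>/<object>."""
    return f"{endpoint}/v1/{account}/{container}/{name}"

def auth_headers(token, metadata=None):
    """Every request carries an auth token; custom metadata rides
    along as X-Object-Meta-* headers."""
    headers = {"X-Auth-Token": token}
    for key, value in (metadata or {}).items():
        headers[f"X-Object-Meta-{key}"] = value
    return headers

# Placeholder endpoint and account for illustration only.
url = object_url("https://dal05.objectstorage.softlayer.net",
                 "AUTH_example", "backups", "db.tar.gz")
print(url)
print(auth_headers("example-token", {"retention": "30d"}))
```

A PUT to that URL with those headers uploads the object; a GET downloads it; a POST updates only the metadata. This is the shape the official language bindings wrap for you.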

Third-Party Applications
The second method is to use third-party applications that support SFTP. This method abstracts the use of REST APIs and gives users the ability to upload, download, and delete objects via a GUI. However, you won’t have the ability to modify metadata when using an SFTP client. Additionally, SoftLayer and OpenStack Swift place a 5GB limit on each object uploaded this way. If an object larger than 5GB needs to be uploaded, you have to follow the OpenStack method of creating large objects to ensure a successful and efficient upload. Unless you’re comfortable with this methodology, it’s strongly recommended that you use either the REST APIs or a storage gateway solution to access files over 5GB.
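For readers curious what the large-object method involves: the core idea is to split the payload into segments below the cap, upload each segment, and then upload a small manifest that stitches them back together in order. A toy sketch of the splitting step, with the segment size shrunk for illustration (in practice you would use something on the order of a gigabyte):

```python
def split_segments(data, size):
    """Yield (segment_name, chunk) pairs in upload order; zero-padded
    names keep the segments lexically sorted for the manifest."""
    for i in range(0, len(data), size):
        yield f"segment-{i // size:08d}", data[i:i + size]

payload = b"x" * 35
segments = list(split_segments(payload, size=10))
print(len(segments))  # 4 chunks of at most 10 bytes each

# Reassembling the chunks in name order recovers the original payload,
# which is exactly what the manifest instructs Swift to do on download.
assert b"".join(chunk for _, chunk in segments) == payload
```

Each chunk is uploaded as its own object, after which the manifest object ties them together so that downloading the manifest streams the full payload.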

SoftLayer Customer Portal
The third method to access SoftLayer object storage is to simply use the SoftLayer customer portal. By using the portal, you have the ability to add containers, add files to containers, delete files from containers, modify metadata, and enable CDN capabilities. As with the SFTP method, you can upload an unlimited number of files, but in the portal each file cannot exceed 20MB in size. Also, there is no bulk upload option within the customer portal; users must select and upload files one at a time. While using the portal is simple, it has some limitations and is best for users who only want to upload a few files of 20MB or less.

Storage Gateways
The last method to access and utilize SoftLayer object storage is storage gateways. Unlike other methods, storage gateways are unique. They’re able to expose traditional storage protocols like iSCSI, NFS, CIFS, and SMB and translate the read/write/modify commands into REST API calls against the object storage service. As a result, these devices offer an easier path to consume SoftLayer object storage for businesses looking to integrate their on-premises environment with the cloud. Some storage gateways also have the ability to compress, deduplicate, and encrypt data in-flight and at-rest. Storage gateways work best with organizations looking to integrate existing applications requiring traditional storage access methods (like backup software) with object storage or to securely transfer and store data to cloud object storage.

Summary
While there are many methods to access SoftLayer object storage, it’s important that you select an option that best meets your requirements relating to data access, security, and integration. For example, if you’re writing an application that requires object storage, you would most likely choose to interact with object storage via REST APIs or use language bindings. Or, if you simply need to integrate existing applications in your environment to cloud object storage, storage gateway would be the best option. In all cases, make sure you can meet your requirements with the appropriate method.

Table 1 lists sample requirements and shows whether each option meets the requirements. Use it to help you with your decision making process:



Table 1: Decision making tool

Click here for more information about SoftLayer’s object storage service and click here for FAQs on object storage.

Click here for information about SoftLayer’s REST-APIs and language bindings.

-Daniel De Araujo & Naeem Altaf

November 19, 2015

SoftLayer and Koding join forces to power a Global Virtual Hackathon


This guest blog post is written by Cole Fox, director of partnerships at Koding.

Koding is excited to partner with SoftLayer on its upcoming Global Virtual Hackathon, happening December 12–13, 2015. The event builds on last year’s Hackathon, where more than 60,000 developers participated from all over the world. The winners took home over $35,000 in prizes! This year, we’ve upped the ante to make the event even larger than the last time: the winner will take home a $100,000 grand prize.

“We are working with Koding for this virtual hackathon as part of our commitment to promote open source technology and support the talented community of developers who are dispersed all over the globe,” said Sandy Carter, general manager of Cloud Ecosystem and Developers at IBM. “Cloud-based open source development platforms like Koding make it easier to get software projects started, and hackathons are a great place to show how these kinds of platforms make software development easier and more fun.”


Why a virtual hackathon?
Hackathons are awesome. They allow developers to solve problems in a very short amount of time. The challenge with traditional hackathons is that they require you to be physically present in a room. With more and more of our lives moving online, why be tied to a physical location to solve problems? Virtual hackathons allow talented individuals from all over the world to participate, collaborate, and showcase their skills, regardless of their physical location. Our Global Virtual Hackathon levels the playing field.

Who won last year?
Educational games, especially those that teach programming, were popular to build—and a few actually won! Want to see what the winners built? Click here to check out a fun yet effective game teaching students to program. Learn more about the team of developers and see their code here. Last year, nine winners across three categories took home a prize. To see a list of last year’s winners, see the blog post here.

Tips to be successful and win this year
Here’s some motivation for you: the grand prize is $100,000. (That’s seed capital for your startup idea!)

So how do you win? First and foremost, apply now! Then talk to some friends and maybe even team up. You can also use Koding to find teammates once you’re accepted. Teammates aren’t a requirement but can definitely make for a fun experience and improve your chances of making something amazing.

Once you’re in, get excited! And be sure to start thinking about what you want to build around this year’s themes.

And the 2015 themes are…
Ready to build something and take home $100,000? Here are this year’s themes:

  • Data Visualization
    Data is everywhere, but how can we make sense of it? Infographics and analytics can bring important information to light that wasn’t previously accessible when stuck in a spreadsheet or database. We challenge you to use some of the tools out there to help articulate some insights.
  • Enterprise Productivity
    The workplace can always be improved and companies are willing to pay a lot of money for great solutions. Build an application that helps employees do their jobs better and you could win big.
  • Educational Games
    Last year’s winning team, WunderBruders, created an educational game. But games aren’t just for children. Studies have shown that games not only improve motor skills, but they are also a great way to learn something new.

Wait a second. What is Koding anyway?
In short, Koding is a developer environment as a service. The platform provides everything you need to move your software development to the cloud, enabling businesses to build productive, collaborative, and efficient development workflows. Businesses large and small face three common challenges: onboarding new team members, workflow efficiency, and knowledge retention. These pain points affect companies across all industries, but for companies involved in software development they are often the most expensive and persistent problems. Koding was built to tackle these inefficiencies head on. Learn more about Koding for Teams.

Can I use my SoftLayer virtual servers with Koding?
Koding’s technical architecture is very flexible. If you have a SoftLayer virtual server, you can easily connect it to your Koding account. The feature is described in detail here.

Think you can hack it? APPLY NOW!

-Cole Fox
