
May 27, 2016

Data Security and Encryption in the Cloud

In Wikipedia’s words, encryption is the process of encoding messages or information in such a way that only authorized parties can read it. On a daily basis, I meet customers from various verticals. Whether it is health care, finance, government, technology, or any other public or privately held entity, they all have specific data security requirements. More importantly, the thought of moving to a public cloud brings its own set of challenges around data security. In fact, data security is often the biggest hurdle in the move from a traditional on-premises data center to a public cloud.

One of the ways to protect your data is by encryption. There are a few ways to encrypt data, and they all have their pros and cons. By the end of this post, you will hopefully have a better understanding of the options available to you and how to choose one that meets your data security requirements.

Data “At Rest” Encryption

At rest encryption refers to the encryption of data that is not moving. This data is usually stored on hardware such as local disk, SAN, NAS, or other portable storage devices. Regardless of how the data gets there, as long as it remains on that device and is not transferred or transmitted over a network, it is considered at rest data.

There are different methodologies to encrypt at rest data. Let’s look at a few of the most common ones:

Disk Encryption: This is a method where all data on a particular physical disk is encrypted. This can be done by using SEDs (self-encrypting disks) or by using a third-party solution from vendors like Vormetric, SafeNet, PrimeFactors, and more. In a public cloud environment, your data will most likely be hosted on a multitenant SAN infrastructure, so key management and the public cloud vendor’s ability to offer dedicated, local, or SAN spindles become critical. Moreover, keep in mind that this encryption methodology does not protect data once it leaves the disk. This method may also be more expensive and may add management overhead. On the other hand, disk encryption solutions are mostly operating system agnostic, allowing for more flexibility.

File Level Encryption: File-level encryption is usually implemented by running a third-party application within the operating system to encrypt files and folders. In many cases, these solutions create a virtual or logical disk on which all files and folders are encrypted. Tools like VeraCrypt (TrueCrypt’s successor), BitLocker, and 7-Zip are a few examples of file encryption software. These are very easy to implement and support all major operating systems.
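Commercial tools aside, the mechanics of file-level encryption are easy to demonstrate. Here is a minimal sketch in Python using the third-party cryptography package (my choice of library for illustration, not one of the vendors above):

```python
from cryptography.fernet import Fernet

# Generate a symmetric key. In practice the key lives in a key manager,
# never alongside the encrypted data.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"contents of a sensitive file"
ciphertext = f.encrypt(plaintext)  # this blob is what sits at rest on disk

# Anyone holding the key can recover the original file.
assert f.decrypt(ciphertext) == plaintext
```

The same round trip is what a file-encryption tool does for every file or folder on the virtual disk; the hard part in production is managing the key, not the cipher call.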

Data “In Flight” Encryption

Encrypting data in flight involves encrypting the data stream at one point and decrypting it at another. For example, if you replicate data across two data centers and want to ensure the confidentiality of this exchange, you would use data in flight encryption to encrypt the data stream as it leaves the primary data center, then decrypt it at the other end of the cable at the secondary data center. Since the data exchange is very brief, the keys used to encrypt the frames or packets are no longer needed after the data is decrypted at the other end, so they are discarded—there is no need to manage these keys. The most common protocols used for in flight data encryption are IPsec VPN and TLS/SSL.
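That throwaway-key property can be sketched in a few lines. The toy below (Python, third-party cryptography package; in real IPsec or TLS the session key comes from the protocol's key exchange, not a local call) encrypts a "frame" with a one-off session key, decrypts it at the "other end," and discards the key:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

session_key = AESGCM.generate_key(bit_length=128)  # negotiated per session
nonce = os.urandom(12)                             # unique per frame

sender = AESGCM(session_key)
frame = sender.encrypt(nonce, b"replication traffic", None)

receiver = AESGCM(session_key)
assert receiver.decrypt(nonce, frame, None) == b"replication traffic"

del session_key  # data delivered; the key is never stored or managed
```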

And there you have it. Hopefully by now you have a good understanding of the most common encryption options available to you. Just keep in mind that more often than not, at rest and in flight encryption are implemented in conjunction and complement each other. When choosing the right methodology, it is critical to understand the use case, application, and compliance requirements. You will also want to make sure that the software or technology you choose uses strong, well-vetted encryption algorithms, such as AES, RSA, or 3DES.

-Zeb Ahmed

May 24, 2016

Streamlining the VMware license ordering process

IBM and VMware’s agreement (announced in February) enables enterprise customers to extend their existing on-premises workloads to the cloud—specifically, the IBM Cloud. Customers can now leverage VMware technologies with IBM’s worldwide cloud data centers, giving them the power to scale globally without incurring capital expense while reducing security risk.

So what does this mean for customers’ VMware administrators? They can quickly realize cost-effective hybrid cloud capabilities by deploying into SoftLayer’s enterprise-grade global cloud platform (VMware@SoftLayer). One of these capabilities is that vSphere workloads and catalogs can be provisioned onto VMware vSphere environments within SoftLayer’s data centers without modification to VMware VMs or guests. The use of a common vSphere hypervisor and management/orchestration platform makes these deployments possible.

vSphere implementations on SoftLayer also enable utilization of other components. Table 1 contains a list of VMware products that are now available for ordering through the SoftLayer customer portal. Note that prices are subject to change. Visit VMware Solutions for the most current pricing.

  • VMware vCenter Server Standard: Included with vSphere
  • VMware vSphere Enterprise Plus: Starting at $85 per processor per month
  • VMware vRealize Suite [includes VMware vRealize (Standard Edition), vRealize Log Insight, and vRealize Automation (Standard Edition)]: Starting at $48 per processor per month
  • VMware vRealize Operations Enterprise Edition: Starting at $68 per processor per month
  • VMware vRealize Operations Advanced Edition: Starting at $33 per processor per month
  • VMware vRealize Automation Enterprise: Starting at $150 per processor per month
  • VMware vRealize Automation Advanced: Starting at $75 per processor per month
  • VMware NSX-V: Starting at $118 per processor per month
  • VMware Integrated OpenStack (VIO): Starting at $11 per processor per month
  • Virtual SAN Standard Tier 1 (0-20 TB): Contact SoftLayer Sales for pricing
  • Virtual SAN Standard Tier 2 (21-64 TB): Contact SoftLayer Sales for pricing
  • Virtual SAN Standard Tier 3 (65-124 TB): Contact SoftLayer Sales for pricing
  • VMware Site Recovery Manager (SRM): Starting at $257 per processor per month

Table 1. VMware products available in the SoftLayer Customer Portal

Use the following steps to order licenses for the VMware products listed in Table 1:

  1. Log in to the SoftLayer customer portal.
  2. Click Devices > Managed > VMware Licenses.


Figure 1. Steps to VMware Licenses page

  3. Click Order VMware Licenses in the top right-hand corner of the VMware Licenses page.


Figure 2. Order VMware Licenses

  4. Click Add License, then use the drop-down lists to select the VMware products and the number of CPUs for the licenses you want to order (Figure 3).

Note: VMware vSphere Enterprise Plus (ESXi 6.0) cannot be ordered through this process. You must still order it as a requested OS when you order your bare metal server.




Figure 3. Select the VMware product and number of CPUs

  5. View the price of the VMware product you selected on the far right of the screen.


Figure 4. View your selection before continuing the ordering process

  6. Click Continue to order the licenses, or click Add License to add more licenses.

Once you click Continue, you are taken back to the VMware Licenses page, which displays your VMware product(s) and license key(s).


Figure 5. List of VMware products and license keys

  7. Download the install files from the link on this page. You will need an SSL connection to the SoftLayer private network to access the download page.
  8. Download the correct VMware product(s) and manually install them into your vSphere environment.

 

- Kerry Staples

May 19, 2016

Bringing the power of GPUs to cloud

The GPU was invented by NVIDIA back in 1999 as a way to quickly render computer graphics by offloading the computational burden from the CPU. A great deal has happened since then—GPUs are now enablers for leading edge deep learning, scientific research, design, and “fast data” querying startups that have ambitions of changing the world.

That’s because GPUs are very efficient at manipulating computer graphics, image processing, and other computationally intensive high performance computing (HPC) applications. Their highly parallel structure makes them more effective than general purpose CPUs for algorithms where the processing of large blocks of data is done in parallel. GPUs, capable of handling multiple calculations at the same time, also have a major performance advantage. This is the reason SoftLayer (now part of IBM Cloud) has brought these capabilities to a broader audience.

We support the NVIDIA Tesla Accelerated Computing Platform, which makes HPC capabilities more accessible to, and affordable for, everyone. Companies like Artomatix and MapD are using our NVIDIA GPU offerings to achieve unprecedented speed and performance, traditionally only achievable by building or renting an HPC lab.

By provisioning SoftLayer bare metal servers with cutting-edge NVIDIA GPU accelerators, any business can harness the processing power needed for HPC. This enables businesses to manage the most complex, compute-intensive workloads—from deep learning and big data analytics to video effects—using affordable, on-demand computing infrastructure.

Take a look at some of the groundbreaking results companies like MapD are experiencing using GPU-enabled technology running on IBM Cloud. They’re making big data exploration visually interactive and insightful by using NVIDIA Tesla K80 GPU accelerators running on SoftLayer bare metal servers.

SoftLayer has also added the NVIDIA Tesla M60 GPU to our arsenal. This GPU technology enables clients to deploy fewer, more powerful servers on our cloud while churning through more jobs. Specifically, simulation runs are cut from weeks or days down to hours when compared to a CPU-only server—think of the performance running tools and applications like Amber for molecular dynamics, TeraChem for quantum chemistry, and Echelon for oil and gas.

The Tesla M60 also speeds up virtualized desktop applications. There is widespread support for running virtualized applications, from AutoCAD to Siemens NX, from a GPU server. This allows clients to centralize their infrastructure while providing access to applications regardless of location. The use cases for GPUs are endless.

With this arsenal, we are one step closer to offering real supercomputing performance on a pay-as-you-go basis, which makes this new approach to tackling big data problems accessible to customers of all sizes. We are at an interesting inflection point in our industry, where GPU technology is opening the door for the next wave of breakthroughs across multiple industries.

-Jerry Gutierrez

May 17, 2016

New routes configured for SoftLayer customers

Customers will see a new route configured on a newly provisioned customer host, or on a customer host after a portal-initiated OS reload. This is part of a greater goal to enable new services and offerings for SoftLayer customers. This route will direct traffic addressed to hosts configured out of the 161.26.0.0/16 network block (161.26.0.0 to 161.26.255.255) to the back-end private gateway IP address configured on customer servers or virtual server instances.

The 161.26.0.0/16 address space is assigned to SoftLayer by IANA and will not be advertised over the front-end public network. This space will be used exclusively on SoftLayer’s back-end private network, will never conflict with network addresses on the Internet, and should never conflict with address space used by third-party VPN service providers.

This new route is similar to the 10.0.0.0/8 route already present on SoftLayer hosts, in that SoftLayer services are addressed out of both ranges. Both the 10.0.0.0/8 route and the 161.26.0.0/16 route will need to be configured on a customer host if that host needs to access all SoftLayer services hosted on the back-end private network. Unlike the 10.0.0.0/8 range, the 161.26.0.0/16 range will be used exclusively for SoftLayer services. Customers will need to ensure that ACLs/firewalls on customer servers, virtual server instances, and gateway appliances are configured to allow connectivity to the 161.26.0.0/16 network block to access these new services.
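On a Linux host, configuring the new route by hand looks something like the following sketch (the gateway address below is a placeholder; use the back-end private gateway already assigned to your server):

```shell
# Add a static route for the SoftLayer services range (requires root).
# 10.0.0.1 is a hypothetical private gateway; substitute your own.
ip route add 161.26.0.0/16 via 10.0.0.1

# Verify the route took effect:
ip route show 161.26.0.0/16
```

To survive a reboot, the route also needs to go into your distribution's persistent network configuration, which varies by OS.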

For more information on this new route, including how to configure existing systems to use it, read more on KnowledgeLayer.

-Curtis

May 11, 2016

Adventures in Bluemix: Migrating to MQ Light

One of my pet projects at SoftLayer is looking at a small collection of fancy scripts that scan through all registered Internet domain names to see how many of them are hosted on SoftLayer’s infrastructure. There are a lot of fun little challenges involved, but one of the biggest challenges is managing the distribution of work so that this scan doesn’t take all year. Queuing services are great for task distribution, and for my initial implementation I decided to give running a RabbitMQ instance a try, since at the time it was the only queuing service I was familiar with. Overall, it took me about a week and one beefy server to go from “I need a queue,” to “I have a queue that is actually doing what I need it to.”

While what I had set up worked, looking back, there is a lot about RabbitMQ that I didn’t really have the time to figure out properly. Around the time I finished the first run of this project, Bluemix announced that its MQ Light service would allow connections from non-Bluemix resources. So when I got some free time, I decided to move the project to a Bluemix-hosted MQ Light queue, and take some notes on how the migration went.

Project overview

To better understand how much work was involved, let me quickly explain how the whole “scanning through every registered domain for SoftLayer hosted domains” thing works.

There are three main moving parts in the project:

  1. The Parser, which is responsible for reading through zone files (which are obtained from the various registrars), filtering out duplicates, and putting nicely formatted domains into a queue.
  2. The Resolver, which is responsible for taking the nicely formatted domains from queue #1, looking up each domain’s IP address, and putting the result into queue #2.
  3. The Checker, which takes the domains from queue #2, checks to see if the domains’ IPs belong to SoftLayer or not, and saves the result in a database.

Each queue entry is a package of about 500 domains, which is roughly 200 KB of text data consisting of the domains and some metadata I used to see how well everything was performing. There are around 160 million domains to review, and resolving a single domain can take anywhere from 0.001 seconds to four seconds, so being able to push domains through the queues quickly is very important.
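The packaging step itself is simple. Here is a sketch of how a parser might batch domains into queue payloads (the function name and payload format are mine, not the project's actual code):

```python
import json

BATCH_SIZE = 500

def batches(domains, size=BATCH_SIZE):
    """Yield queue-ready JSON payloads of `size` domains each."""
    batch = []
    for domain in domains:
        batch.append(domain)
        if len(batch) == size:
            yield json.dumps({"domains": batch})
            batch = []
    if batch:  # don't drop the final partial batch
        yield json.dumps({"domains": batch})

# 1,200 domains -> 3 payloads (500 + 500 + 200)
payloads = list(batches(f"site{i}.example" for i in range(1200)))
```

Batching like this keeps the per-message overhead small relative to the work each consumer does, which matters when 160 million domains have to flow through the queue.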

Things to be aware of

Going into this migration, I made a lot of assumptions about how things worked that caused me grief. So if you are in a similar situation, here is what I wish someone had told me.

AMQP 1.0: MQ Light implements the AMQP 1.0 protocol, which is great, because it is the newest and greatest. As everyone knows, newer is usually better. The problem is that my application was using the python-pika library to connect to RabbitMQ, and both implement AMQP 0.9.1, which isn’t compatible with AMQP 1.0. The Python library I was using gave me a version error when trying to connect to MQ Light, which required a bit of refactoring to get everything working properly. The core ideas are the same, but some of the specific API calls are slightly different.

Persistence: Messages sent to an MQ Light queue without active subscribers will be lost, which took me a while to figure out. The UI indicates when this happens, so this is likely just a problem of me not reading the documentation properly and assuming MQ Light worked like RabbitMQ.




Threads: The python-mqlight library uses threads fairly heavily, which is great for performance but makes programming a little more thought-intensive. Make sure you wait for the connection to initialize before sending any messages, and make sure all your messages have been sent before exiting.

Apache Proton: MQ Light is built on the Apache Qpid Proton project, and the python-mqlight library uses it as well.
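The two rules from the Threads note above, wait for the connection before sending and drain before exiting, can be sketched with plain Python threading primitives (a generic pattern, not the python-mqlight API):

```python
import threading

connected = threading.Event()
drained = threading.Event()

def on_started():
    # In a real client this is the "connection is up" callback.
    connected.set()

def on_drained():
    # Fires once every queued message has actually gone out.
    drained.set()

# Simulate the library firing callbacks from its own background thread.
threading.Timer(0.05, on_started).start()

connected.wait(timeout=5)   # 1) don't send until the connection is up
# ... send_message(...) calls would go here ...
threading.Timer(0.05, on_drained).start()
drained.wait(timeout=5)     # 2) don't exit until everything is flushed
```

Skipping either wait is what produces the confusing "my messages vanished" symptoms: sends before the connection is ready fail, and exiting early abandons anything still buffered.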

Setting up MQ Light

Aside from those small issues I mentioned, MQ Light was really easy to set up and start using, especially when compared to running my own RabbitMQ instance.




  1. Set up the MQ Light Service in Bluemix.
  2. Install the python-mqlight library (or whatever library supports your language of choice). There are a variety of MQ Light Libraries.
  3. Try the send/receive examples.
  4. Write some code.
  5. Watch the messages come in, and profit.

That’s all there is to it. As a developer, the ease with which I can set up services to try is one of the best things about Bluemix, with MQ Light making a great addition to its portfolio of services.

Some real numbers

After I refactored my code to use either the pika or python-mqlight libraries interchangeably, I ran a sample set of data through each library to see what impact it had on overall performance, and I was pleasantly surprised by the results.

Doing a full run-through of all domains would take about seven hours, so I ran this test with only 10,364 domains. Below are the running times for each section, in seconds.

Local RabbitMQ (running on a 4-core, 49 GB RAM VSI):

  • Parser: 0.054s
  • Resolver: 90.485s
  • Checker: 0.0027s

Bluemix MQ Light:

  • Parser: 1.593s
  • Resolver: 86.756s
  • Checker: 6.766s

Since I am using the free, shared tier of MQ Light, I was honestly expecting much worse results. Having only a few seconds' increase in runtime was a really big win for MQ Light.

Overall, I was very pleased working with MQ Light, and I highly suggest it as a starting place for anyone wanting to check out queuing services. It was easy to set up, free to try out, and pretty simple once I started to understand the basics.

-Chris

May 5, 2016

Everything you need to know about IBM POWER8 on SoftLayer

SoftLayer provides industry-leading cloud Infrastructure as a Service from a growing number of data centers around the world. To enable clients to draw critical insights and make better decisions faster, now there’s even more good news—customers and partners can use and rely on the secure, flexible, and open platform of IBM POWER Systems, which have just become available in SoftLayer’s DAL09 data center.

POWER8 servers are built with a processor designed and optimized specifically for big data workloads combining compute power, cutting-edge memory bandwidth, and I/O in ways that result in increased levels of performance, resiliency, availability, and security.

IBM POWER systems were designed to run many of the most demanding enterprise applications, industry-specific solutions, relational database management systems, and high performance computing environments. POWER8 servers are an ideal platform for Linux and support a vast ecosystem of open source, ISV, and IBM software products, giving clients a single, industry-leading open architecture (IBM POWER) in which to store, retrieve, and derive value from the “gold mine” of next-generation applications.

The new POWER8 servers available from SoftLayer offer an optimal hybrid cloud infrastructure to test new Linux workloads in a secure and isolated cloud environment with reduced risk. As clients explore newer use cases like advanced analytics, machine learning, and cognitive computing against the combination of vast amounts of both structured and unstructured data, POWER8 and SoftLayer are in a unique position to accelerate client value. This new offering will also continue to leverage the rapidly expanding community of developers contributing to the OpenPOWER ecosystem as well as thousands of independent software vendors that support Linux on Power applications.

The explosive growth of both structured and unstructured data requires businesses to derive insights and change faster than ever to keep pace. The cloud enables you to do just that. Our new and unique solution pairs SoftLayer’s network-within-a-network topology for true out-of-band access, an easy-to-use customer portal, and robust APIs for full remote access to all product and service management options with the unique high-performance technology of IBM POWER8, to help accelerate the creation and delivery of the next generation of IT solutions.

For more details, visit our POWER8 servers page.

 

-Chuck Calio,  IBM Power Systems Growth Solution Specialist

May 3, 2016

Make the most of Watson Language Translation on Bluemix

How many languages can you speak (sorry, fellow geeks; I mean human languages, not programming)?

Every day, people across the globe depend more and more on the Internet for their day-to-day activities, increasing the need for software to support multiple languages to accommodate the growing diversity of its users. If you develop software, it is only a matter of time before you are tasked with translating your applications.

Wouldn't it be great if you could learn something with just a few keystrokes? Just like Neo in The Matrix when he learns kung fu. Well, wish no more! I'll show you how to teach your applications to speak multiple languages with just a few keystrokes using Watson’s Language Translation service, available through Bluemix. It provides on-the-fly translation between many languages. You pay only for what you use, and it’s consumable through web services, which means pretty much any application can connect to it—and it's platform and technology agnostic!

I'll show you how easy it is to create a PHP program with language translation capabilities using Watson's service.

Step 1: The client.

You can write your own code to interact with Watson’s Translation API, but why should you? The work is already done for you. You can pull in the client via Composer, the de facto dependency manager for PHP. Make sure you have Composer installed, then create a composer.json file with the following contents:

composer.json file
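The screenshot of the file hasn't survived in this archive. As a stand-in, a composer.json for this kind of setup is a one-entry require block; the package name below is hypothetical, since the original screenshot named the actual client library:

```json
{
    "require": {
        "vendor/watson-language-translation-client": "^1.0"
    }
}
```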



We will now ask Composer to install our dependency. Execute one of the following commands from your CLI:



Installing the dependency



After the command finishes, you should have a 'vendor' directory created.

 

Step 2: The credentials.

From Bluemix, add the Language Translation service to your application and retrieve its credentials from the application's dashboard (shown below).






 

Step 3: Put everything together.

At the same level where the composer.json file was created in Step 1, create a PHP file named test.php with the following contents:

test.php file





Save the file, buckle up, and execute it from the command line:

Execute test.php

 

Voilà! Your application now speaks French!
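If you would rather sanity-check the service without any PHP at all, the underlying request is a single authenticated POST. Here is a curl equivalent (the endpoint follows the 2016-era Language Translation v2 REST API; treat the exact URL as an assumption, and take the credentials from Step 2):

```shell
curl -u "$USERNAME:$PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"text": ["Hello, world"], "source": "en", "target": "fr"}' \
  "https://gateway.watsonplatform.net/language-translation/api/v2/translate"
```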

Explore other languages Watson knows and other cool features available through Watson's Language Translation service.

 

-Sergio


April 26, 2016

Cloud. Ready-to-Wear.

It’s been five years since I started my journey with SoftLayer. And what a journey it has been—from being one of the first few folks in our Amsterdam office, to becoming part of the mega-family of IBMers; from one data center in Europe to six on this side of the pond and 40+ around the globe; from “Who is SoftLayer?” (or my favorite, “SoftPlayer”), to becoming a cloud environment fundamental for some of the biggest and boldest organizations worldwide.

But the most thrilling difference between 2016 and 2011 that I’ve observed lately is a shift in the market’s perception of cloud, in what matters most to adopters, and in the technology itself becoming mainstream.

Organizations of all sizes (small, medium, and large), while still raising valid questions around control and security, are now more often talking about the challenges of managing combined on-prem and shared environments, the readiness of their legacy applications to migrate to cloud, and their staff’s competency to orchestrate the new architecture.

At Cloud Expo 2016 (the fifth one for the SoftLayer EMEA team), next to two tremendous keynotes given by Sebastian Krause, General Manager IBM Cloud Europe, and by Rashik Parmar, Lead IBM Cloud Advisor/Europe IBM Distinguished Engineer, we held a roundtable to discuss the connection between hybrid cloud and agile business. Moderated by Rashik Parmar, the discussion confirmed the market’s evolution: from recognizing cloud as technology still proving its value, to technology critical in gaining a competitive advantage in today’s dynamic economy.

Rashik’s guests had deep technology backgrounds and came from organizations of all sizes and flavors—banking, supply chain managements, ISV, publishing, manufacturing, MSP, insurance, and digital entertainment, to name a few. Most of them already have live cloud deployments, or they have one ready to go into production this year.

When it came to the core factors underlying a move into the cloud, they unanimously listed gaining business agility and faster time-to-market. For a few minutes, there was a lively conversation among the panelists about cost and savings. They cited examples of poorly planned cloud implementations that were 20 to 30 percent more costly than keeping the legacy IT setup. Drawing on the example of a large Australian bank, Rashik urged companies to start the move to cloud with a careful map of their own application landscape before thinking about remodeling the architecture to accommodate cloud.

The next questions the panelists tackled pertained to the drivers behind building hybrid cloud environments, which included:

  • Starting with some workloads and building a business case based on their success; from there, expanding the solution organization-wide
  • Increasing the speed of market entry for new solutions and products
  • Retiring certain legacy applications on-prem, while deploying new ones on cloud
  • Regulatory requirements that mean some workloads or data must remain on-prem.

When asked to define “hybrid cloud,” Rashik addressed the highly ambiguous term by simply stating that it refers to any combination of software-defined environment and automation with traditional IT.

The delegates discussed the types of cloud—local, dedicated, and shared—and found it difficult to define who controls a hybrid cloud, and who is accountable for which component when something goes wrong. There was general agreement that many organizations still prioritize physical security over digital security, which does not translate neatly to the world of cloud.

Rashik explored, from his experience, where most cloud migrations usually originate. He referred to usage patterns and how organizations become agile with hybrid IT. The delegates agreed that gaining the option of immediate burstability, and removing the headache of optimal resource management from hardware to internal talent, are especially important.

Rashik then addressed the inhibitors of moving into cloud—and here’s the part that inspired me to write this post. While security (data security and job security) and control over the environment came up, the focus repeatedly shifted toward the challenges of applications being incompatible with cloud architecture, complicated application landscapes, and the scarcity of IT professionals skilled in managing complex (hybrid) cloud environments.

This is a visible trend demonstrating that the market has left the cloud department store’s changing room and is ready not only to make the purchase, but “ready to wear” the new technology, with a clear plan for where and when, and with an aim to achieve specific outcomes.

The conversation ended with energizing insights about API-driven innovation that enables developers to assemble a wide spectrum of functions, as opposed to being “just a coder.” Other topics included cognitive computing that bridges digital business with digital intelligence, and platforms such as blockchain that are gaining momentum.

To think that not so long ago, I had to explain to the average Cloud Expo delegate what “IaaS” stands for. We’ve come a long way.

 

-Michalina

April 6, 2016

Cloudocracy: Cedato believes in showing the right ad to the right viewer

In the latest edition of our Cloudocracy series—which celebrates SoftLayer customers shaking up their industries—meet Cedato. Have you noticed video ads appearing more often over non-video content online? SoftLayer customer Cedato makes that possible. We sat down with Dvir Doron, Cedato’s CMO, to learn more.

SOFTLAYER: There’s something we’ve always wondered about online video, so perhaps you can help us out. Why are there so many cat videos?

DVIR DORON: I’ll start with a confession: I’ve never uploaded a video of my pets, my children, or any of my hobbies. At the same time, I know I’m an anomaly. Most people want to share their lives, experiences, and happy moments. Cats capture that. We talk about user generated content, and cat and baby videos drove viewership and content at first. I’m not sure that’s the case today. People have moved on. There are more “fail” videos of people falling over and doing crazy stuff now. They make me laugh. What can I say? I’m weak.

SL: Let’s talk about a strength! How are you shaking up the online advertising business?

DORON: People love video ads and they generate tremendous value, but a few years ago the industry was hitting a roadblock because there wasn’t enough advertising space. Then the market started to embrace what we call “in place” advertising, which enables us to place video ads on non-video content. With the shift to mobile, that created a huge challenge. You have issues with the format, streaming conventions, and standards, and things don’t work very well. On the one hand, there was a huge opportunity to increase the supply of ad space, which was hugely in demand. At the same time, there was a major technical issue to solve.

We were established in the middle of last year to offer a sophisticated software layer that enables publishers to run video ads on video and non-video content. Our platform chooses the ad that will load the fastest, matches the user’s interests, and generates the best value for the advertiser and publisher. As long as you keep everyone happy, they will keep coming back.

SL: There is something of a backlash against advertising now, though, with users increasingly installing ad blockers. How can the advertising industry win them over?

DORON: There are a lot of sites out there that offer a very poor experience, but people don’t realize that slow loading times and buffering are not necessarily because of content delivery issues, poor infrastructure, or site mechanics. It’s a result of poor monetization techniques. Websites are trying to show ads that will maximize their revenue but often the ad behind that is not effective. Sorry for the self-promotion, but I believe that if you show the right ad to the right viewer with the lowest possible latency, everyone wins. If the wait times are low, the experience will be good.

SL: That’s an interesting point. What would you say has been your biggest challenge as a startup in this market?

DORON: We were blessed with very rapid growth, so the challenge for us was to provide a scalable platform. We were soon serving billions of ads per month. We needed someone we could count on to be both scalable and elastic, all over the world. So we’ve partnered with SoftLayer from the very beginning. We were extremely happy with the people and the level of support we were getting. As a startup, we really need that extra bit of support.

SL: And we’ve been pleased to provide it! What are your plans for the future?

DORON: We’re looking at TV advertising. The ability to match an ad to a specific viewer is coming in the next couple of years. Not necessarily to broadcast TV, but it’s coming. We’re trying to find areas where it makes sense to connect the advertisers online with TV audiences.

SL: Your focus is usually on the bits between the TV programs. But if we gave you the chance to edit any film or TV show, what would you change?

DORON: I would change the ending of Lost. It was epic. I watched all seven seasons of it, and this was when there were about 20 episodes per season. No spoilers, but I’d change it to something more original.

 

-Michalina

 

