
April 17, 2015

A Grandmother’s Advice for Startups: It’s Always a No ‘Til You Ask

Today my grandmother turns 95. She's in amazing shape for someone who's nearly a century old. She drives herself around, does her own grocery shopping, and still goes to the beauty parlor every other week to get her hair set.

Growing up less than a mile from her and my granddad, I spent a lot of time with them over the years. Of all of the support, comfort, and wisdom they imparted to me over that time, one piece of advice from my grandmother has stood the test of time. No matter where I was in the world, or what I was doing, it has been relevant and helpful. That advice is:

You never know unless you ask.

Simple and powerful, it has guided me throughout my life. Here are some ways you can put this to work for you.

Ask for the Introduction
Whether you're fundraising, hiring, selling, or just looking for feedback, you need to expand your network to reach the right people. The best way to do this is through strategic introductions. In the Catalyst program, making connections is part of our offering to companies, and introductions are a regular part of my work in the startup community. In my experience, people want to help other people, so as long as you're not taking advantage of their goodwill, ask for introductions. You're likely to get a nice warm introduction, which can lead to a meeting.

Ask for the Meeting
Now that you have that introduction, ask for a meeting with a purpose in mind. Even if you don't have an introduction, many people in the startup world are approachable with a cold email.

Guy Kawasaki, former chief evangelist for Apple, and author of 13 books including The Art of the Start 2.0, wrote a fantastic post, "The Effective Emailer," on how to craft that all-important message with your ask.

Another great take on the email ask comes from venture capitalist Brad Feld: "If You Want a Response, Ask Specific Questions." This post offers advice on how not to approach someone. The title says it all: if you want a response, ask a specific question.

Ask for the Sale
Many startup founders don't have sales experience and so often miss this incredibly simple, yet incredibly important part of sales: asking for the sale. Even in mass-market B2C businesses, you'll be surprised how easy and effective it is to ask people to sign up. Your first sales will be high-touch and likely require a big time investment from your team. But all of that work will go to waste if you don't say, "Will you sign up to be our customer?" And if the answer is a no, then ask, "What are the next steps for working with you?"

Empower Yourself
It's empowering to ask for something that you want. This is the heart of my grandmother's advice. She is and has always been an empowered woman. I believe a big part of that came from not being afraid to ask for what she wanted. As long as you're polite and respectful in your approach, step up and ask.

The opposite of this is to meekly watch the world go by. If you do not ask, the world will sweep you along in the direction of other people's choices. This is the path to failure as an entrepreneur.

The way to empower yourself in this world starts with asking for what you want. Whether it's something as simple as asking for a special order at a restaurant or as big as asking for an investment, make that ask. After all, you'll never know unless you ask.

-Rich

April 10, 2015

The SLayer Standard Vol. 1, No. 9

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Welcome to the Masters
If you’re not practicing your swing this weekend, you’re watching the Masters. Over the next couple of days, professional golfers will seek their shot at landing the coveted Green Jacket. And while everyone might be watching the leaderboard, IBM will be hard at work in what they are calling the “bunker,” located in a small green building at the Augusta National Golf Club.

What does IBM have to do with the Masters? Everything.

Read how IBM, backed by the power of the SoftLayer cloud, is making the Masters website virtually uncrashable.

And for those who can’t line the greens to watch their favorite players, IBM is utilizing the lasers the Golf Club has placed around the course to track each ball as it flies from hole to hole. Learn more about the golf-ball tracking technology here.

Open Happiness
In a move to streamline tech operations and cut costs, Coca-Cola Amatil is partnering with IBM Cloud to move some of its platforms to SoftLayer data centers in Sydney and Melbourne—a deal sure to open happiness.

"The move to SoftLayer will provide us with a game-changing level of flexibility, resiliency and reliability to ramp up and down capacity as needed. It will also remove the need for large expenditure on IT infrastructure." - Barry Simpson, CIO, Coca-Cola Amatil

Read more about the new CCA cloud environment and the five-year, multimillion-dollar deal.

-JRL

April 1, 2015

The SLayer Standard Vol. 1, No. 8

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Sunny Skies for IBM Cloud and The Weather Company
IBM made big headlines on Tuesday when it announced it would team up with The Weather Company, boasting a “100 percent chance of smarter business forecasts.”

Bloomberg sits down with Bob Picciano, IBM Analytics Senior VP, and David Kenny, The Weather Company CEO, to discuss what makes this partnership different from past efforts to analyze the weather. Using Watson Analytics and the Internet of Things, the partnership will transform business decision-making based on weather behavior. Read how IBM’s $3 billion investment in the Internet of Things will collect weather data from 100,000 weather stations around the world and turn it into meaningful data for business owners.

Indian Startups Choose SoftLayer
According to the National Association of Software and Services Companies (NASSCOM), India has the world’s third largest and the fastest-growing startup ecosystem. Like many SoftLayer startup customers, Goldstar Healthcare, Vtiger, Clematix, and Ecoziee Marketing utilize the SoftLayer cloud infrastructure platform to “begin on a small scale and then expand rapidly to meet workload demands without having to worry about large investments in infrastructure development.”

New SoftLayer Storage Offerings
Last week, SoftLayer announced the launch of block storage and file storage complete with Endurance- and Performance-class tiers. The media was fast to report the new offerings that provide customers more choice, flexibility, and control for their storage needs and workloads.

“ … SoftLayer’s focus on tailored capacity and performance needs coincides with the trend in the cloud market of customizing technology based on different application requirements.”– IBM Splits SoftLayer Cloud Storage Into Endurance, Performance Tiers

“In the age of the cloud, the relationship between cloud storage capacity and I/O performance has officially become divorced.” – IBM Falls Into Cloud Storage Pricing Line

Pick your favorite online tech media and read all about it: SiliconANGLE, Computer Weekly, Data Center Knowledge, CRN, V3, Cloud Computing Intelligence, Storage Networking Solutions UK, and DCS Europe.

#IBMandTwitter
There are more than half a billion tweets posted to Twitter every day. IBM is teaming up with Twitter to turn those “tweets into insights for more than 100 organizations around the world.” Leon Sun of The Motley Fool takes a closer look at what the deal means to IBM and Twitter.

“Twitter provides a powerful new lens through which to look at the world. This partnership, drawing on IBM’s leading cloud-based analytics platform, will help clients enrich business decisions with an entirely new class of data. This is the latest example of how IBM is reimagining work.” – Ginni Rometty, IBM Chairman, President and CEO

-JRL

March 30, 2015

The Importance of Data's Physical Location in the Cloud

If top-tier cloud providers use similar network hardware in their data centers and connect to the same transit and peering bandwidth providers, how can SoftLayer claim to provide the best network performance in the cloud computing industry?

Over the years, I've heard variations of that question asked dozens of times, and it's fairly easy to answer with impressive facts and figures. All SoftLayer data centers and network points of presence (PoPs) are connected to our unique global network backbone, which carries public, private, and management traffic to and from servers. Using our network connectivity table, some back-of-the-envelope calculations reveal that we have more than 2,500Gbps of bandwidth connectivity with some of the largest transit and peering bandwidth providers in the world (and that total doesn't even include the private peering relationships we have with other providers in various regional markets). Additionally, customers may order servers with up to 10Gbps network ports in our data centers.

For the most part, those stats explain our differentiation, but part of the bigger network performance story is still missing, and to a certain extent it has been untold—until today.

The 2,500+Gbps of bandwidth connectivity we break out in the network connectivity table only accounts for the on-ramps and off-ramps of our network. Our global network backbone is actually made up of an additional 2,600+Gbps of bandwidth connectivity ... and all of that backbone connectivity transports SoftLayer-related traffic.

This robust network architecture streamlines the access to and delivery of data on SoftLayer servers. When you access a SoftLayer server, the network is designed to bring you onto our global backbone as quickly as possible at one of our network PoPs, and when you're on our global backbone, you'll experience fewer hops (and a more direct route that we control). When one of your users requests data from your SoftLayer server, that data travels across the global backbone to the nearest network PoP, where it is handed off to another provider to carry the data the "last mile."

With this controlled environment, I decided to undertake an impromptu science experiment to demonstrate how location and physical distance affect network performance in the cloud.

Speed Testing on the SoftLayer Global Network Backbone

I work in the SoftLayer office in downtown Houston, Texas. In network-speak, this location is HOU04. You won't find that location on any data center or network tables because it's just an office, but it's connected to the same global backbone as our data centers and network points of presence. From my office, the "last mile" doesn't exist; when I access a SoftLayer server, my bits and bytes only travel across the SoftLayer network, so we're effectively cutting out a number of uncontrollable variables in the process of running network speed tests.

For better or worse, I didn't tell any network engineers that I planned to run speed tests to every available data center and share the results I found, so you're seeing exactly what I saw with no tomfoolery. I just fired up my browser, headed to our Data Centers page, and made my way down the list using the SpeedTest option for each facility. Customers often go through this process when trying to determine the latency, speeds, and network path that they can expect from servers in each data center, but if we look at the results collectively, we can learn a lot more about network performance in general.

With the results, we'll discuss how network speed tests work, what the results mean, and why some might be surprising. If you're feeling scientific and want to run the tests yourself, you're more than welcome to do so.

The Ookla SpeedTests we link to from the data centers table measured the latency (ping time), jitter (variation in latency), download speeds, and upload speeds between the user's computer and the data center's test server. To run this experiment, I connected my MacBook Pro via Ethernet to a 100Mbps wired connection. At the end of each speed test, I took a screenshot of the performance stats:

SoftLayer Network Speed Test
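Of the four stats the test reports, jitter is the least self-explanatory: it's the variation between successive latency samples. As a rough sketch (using made-up sample values, not actual test data), latency and jitter could be derived from raw round-trip times like this:

```python
from statistics import mean

def summarize_pings(rtts_ms):
    """Summarize round-trip times the way a speed test reports them."""
    latency = mean(rtts_ms)
    # Jitter: mean absolute difference between consecutive samples.
    jitter = mean(abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:]))
    return round(latency), round(jitter)

# Hypothetical samples against a nearby test server
latency, jitter = summarize_pings([2.1, 1.9, 2.0, 3.8, 2.2])
print(latency, jitter)  # 2 1
```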

To save you the trouble of trying to read all of the stats on each data center as they cycle through that animated GIF, I also put them into a table (click the data center name to see its results screenshot in a new window):

Data Center Latency (ms) Download Speed (Mbps) Upload Speed (Mbps) Jitter (ms)
AMS01 121 77.69 82.18 1
DAL01 9 93.16 87.43 0
DAL05 7 93.16 83.77 0
DAL06 7 93.11 83.50 0
DAL07 8 93.08 83.60 0
DAL09 11 93.05 82.54 0
FRA02 128 78.11 85.08 0
HKG02 184 50.75 78.93 2
HOU02 2 93.12 83.45 1
LON02 114 77.41 83.74 2
MEL01 186 63.40 78.73 1
MEX01 27 92.32 83.29 1
MON01 52 89.65 85.94 3
PAR01 127 82.40 83.38 0
SJC01 44 90.43 83.60 1
SEA01 50 90.33 83.23 2
SNG01 195 40.35 72.35 1
SYD01 196 61.04 75.82 4
TOK02 135 75.63 82.20 2
TOR01 40 90.37 82.90 1
WDC01 43 89.68 84.35 0

By performing these speed tests on the SoftLayer network, we can actually learn a lot about how speed tests work and how physical location affects network performance. But before we get into that, let's take note of a few interesting results from the table above:

  • The lowest latency from my office is to the HOU02 (Houston, Texas) data center. That data center is about 14.2 miles away as the crow flies.
  • The highest latency results from my office are to the SYD01 (Sydney, Australia) and SNG01 (Singapore) data centers. Those data centers are at least 8,600 and 10,000 miles away, respectively.
  • The fastest download speed observed is 93.16Mbps, and that number was seen from two data centers: DAL01 and DAL05.
  • The slowest download speed observed is 40.35Mbps from SNG01.
  • The fastest upload speed observed is 87.43Mbps to DAL01.
  • The slowest upload speed observed is 72.35Mbps to SNG01.
  • The upload speeds observed are faster than the download speeds from every data center outside of North America.

Are you surprised that we didn't see any results closer to 100Mbps? Is our server in Singapore underperforming? Are servers outside of North America more selfish to receive data and stingy to give it back?

Those are great questions, and they actually jumpstart an explanation of how the network tests work and what they're telling us.

Maximum Download Speed on 100Mbps Connection

If my office is 2 milliseconds from the test server in HOU02, why is my download speed only 93.12Mbps? To answer this question, we need to understand that to perform these tests, a connection is made using Transmission Control Protocol (TCP) to move the data, and TCP does a lot of work in the background. The download is broken into a number of tiny chunks called packets and sent from the sender to the receiver. TCP wants to ensure that each packet that is sent is received, so the receiver sends an acknowledgement back to the sender to confirm that the packet arrived. If the sender is unable to verify that a given packet was successfully delivered to the receiver, the sender will resend the packet.

This system is pretty simple, but in actuality, it's very dynamic. TCP wants to be as efficient as possible: to send the fewest number of packets needed to get the entire message across. To accomplish this, TCP can modify the size of each packet to optimize it for each communication. The receiver dictates how large each packet should be by advertising a receive window, which it continually analyzes and adjusts to allow the largest packets possible without the connection becoming unstable. Some operating systems are better than others at tweaking and optimizing TCP transfer rates, but the work TCP does to ensure that packets are sent and received without error carries overhead, and that overhead limits the maximum speed we can achieve.

Understanding the SNG01 Results

Why did my SNG01 speed test max out at a meager 40.35Mbps on my 100Mbps connection? Well, now that we understand how TCP is working behind the scenes, we can see why our download speeds from Singapore are lower than we'd expect. Latency between the sending and successful receipt of a packet plays into TCP’s considerations of a stable connection. Higher ping times will cause TCP to send smaller packet sizes than it would for lower ping times to ensure that no sizable packet is lost (which would have to be reproduced and resent).

With our global backbone optimizing the network path of the packets between Houston and Singapore, the more than 10,000-mile journey, the nature of TCP, and my computer's TCP receive window adjustments all factor into the download speeds recorded from SNG01. Looking at the results in the context of the distance the data has to travel, our results are actually well within the expected performance.
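As a back-of-the-envelope illustration of that ceiling: TCP can keep at most one receive window's worth of data in flight per round trip, so window size divided by RTT bounds throughput. The sketch below uses the classic 64KB unscaled window purely for illustration; real stacks negotiate much larger windows, which is why the actual SNG01 result lands well above this naive floor.

```python
def max_tcp_throughput_mbps(window_bytes, rtt_ms, link_mbps):
    """Upper bound on TCP throughput: one receive window per round trip."""
    window_limit_mbps = (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000
    return min(link_mbps, window_limit_mbps)

# HOU02 at 2ms RTT: the 100Mbps port is the bottleneck
print(max_tcp_throughput_mbps(65_535, 2, 100))    # 100
# SNG01 at 195ms RTT: the (unscaled) receive window is the bottleneck
print(max_tcp_throughput_mbps(65_535, 195, 100))  # ~2.69
```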

Because the default behavior of TCP is partially to blame for the results, we could actually tweak the test and tune our configurations to deliver faster speeds. To confirm that improvements can be made relatively easily, we can actually just look at the answer to our third question...

Upload > Download?

Why are the upload speeds faster than the download speeds after latency jumps from 50ms to 114ms? Every location in North America is within 2,000 miles of Houston, while the closest location outside of North America is about 5,000 miles away. With what we've learned about how TCP and physical distance play into download speeds, that jump in distance explains why the download speeds drop from 90.33Mbps to 77.41Mbps as soon as we cross an ocean, but how can the upload speeds to Europe (and even APAC) stay on par with their North American counterparts? The only difference between our download path and upload path is which side is sending and which side is receiving. And if the receiver determines the size of the TCP receive window, the most likely culprit in the discrepancy between download and upload speeds is TCP windowing.

A Linux server is built and optimized to be a server, whereas my Mac OS X laptop has a lot of other responsibilities, so it shouldn't come as a surprise that the default TCP receive window handling is better on the server side. With changes to the way my laptop handles TCP, download speeds would likely improve significantly. Additionally, if we wanted to push the envelope even further, we might consider using a different transfer protocol to take advantage of the consistent, controlled network environment.

The Importance of Physical Location in Cloud Computing

These real-world test results under controlled conditions demonstrate the significance of data's geographic proximity to its user on the user's perceived network performance. We know that the network latency in a 14-mile trip will be lower than the latency in a 10,000-mile trip, but we often don't think about the ripple effect latency has on other network performance indicators. And this experiment actually controls a lot of other variables that can exacerbate the performance impact of geographic distance. The tests were run on a 100Mbps connection because that's a pretty common maximum port speed, but if we ran the same tests on a GigE line, the difference would be even more dramatic. Proof: HOU02 @ 1Gbps v. SNG01 @ 1Gbps

Let's apply our experiment to a real-world example: Half of our site's user base is in Paris and the other half is in Singapore. If we chose to host our cloud infrastructure exclusively from Paris, our users would see dramatically different results. Users in Paris would have sub-10ms latency while users in Singapore have about 300ms of latency. Obviously, operating cloud servers in both markets would be the best way to ensure peak performance in both locations, but what if you can only afford to provision your cloud infrastructure in one location? Where would you choose to provision that infrastructure to provide a consistent user experience for your audience in both markets?

Given what we've learned, we should probably choose a location with roughly the same latency to both markets. We can use the SoftLayer Looking Glass to see that San Jose, California (SJC01) would be a logical midpoint ... At this second, the latency between SJC and PAR on the SoftLayer backbone is 149ms, and the latency between SJC and SNG is 162ms, so both would experience very similar performance (all else being equal). Our users in the two markets won't experience mind-blowing speeds, but neither will experience mind-numbing speeds either.
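This midpoint logic is easy to mechanize: given backbone latencies from candidate data centers to each market, pick the one that minimizes the worst-case latency any market sees. The SJC01 figures below are the Looking Glass readings quoted above; the PAR01 and SNG01 numbers are hypothetical estimates for illustration only.

```python
# Latency (ms) from candidate data centers to each user market.
# SJC01 values are from the Looking Glass reading above; the
# others are hypothetical values for illustration.
candidates = {
    "SJC01": {"Paris": 149, "Singapore": 162},
    "PAR01": {"Paris": 5,   "Singapore": 310},
    "SNG01": {"Paris": 310, "Singapore": 5},
}

def best_midpoint(candidates):
    # Minimize the worst-case latency experienced by any market.
    return min(candidates, key=lambda dc: max(candidates[dc].values()))

print(best_midpoint(candidates))  # SJC01
```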

The network performance implications of physical distance apply to all cloud providers, but because of the SoftLayer global network backbone, we're able to control many of the variables that lead to higher (or inconsistent) latency to and from a given data center. The longer a single provider can route traffic, the more efficiently that traffic will move. You might see the same latency speeds to another provider's cloud infrastructure from a given location at a given time across the public Internet, but you certainly won't see the same consistency from all locations at all times. SoftLayer has spent millions of dollars to build, maintain, and grow our global network backbone to transport public and private network traffic, and as a result, we feel pretty good about claiming to provide the best network performance in cloud computing.

-@khazard

March 27, 2015

Building “A Thing” at Hackster.io’s Hardware Weekend

Introduction to Hackster.io

Over the weekend in San Francisco, I attended a very cool hackathon put together by the good folks at Hackster.io. Hackster.io’s Hardware Weekend is a series of hackathons all over the country designed to bring together people with a passion for building things, give them access to industry mentors, and see what fun and exciting things they come up with in two days. The registration desk was filled with all kinds of hardware modules to be used for whatever project you could dream up—from Intel Edison boards and the Grove Starter Kit to a few other things I couldn't begin to identify, and of course, plenty of stickers.

After a delicious breakfast, we heard a variety of potential product pitches by the attendees, then everyone split off into groups to support their favorite ideas and turn them into a reality.

When not hard at work coding, soldering, or wiring up devices, the attendees heard talks from a variety of industry leaders, who shared their struggles and what worked for their products. The founder of spark.io gave a great talk on how his company began and where it is today.

Building a thing!
After lunch, Phil Jackson, SoftLayer’s lead technology evangelist, gave an eloquent crash course in SoftLayer and how to get your new thing onto the Internet of Things. Phil and I have a long history in Web development, so we provided answers to many questions on that subject. But when it comes to hardware, we are fairly green. So when we weren't helping teams get into the cloud, we tried our hand at building something ourselves.

We started off with some of the hardware handouts: an Edison board and the Grove Starter Kit. We wanted to complete a project that worked in the same time the rest of the teams had—and showed off some of the power of SoftLayer, too. Our idea was to read the Grove Kit’s heat sensor, display the reading on the LCD, and post the result to an IBM Cloudant database, which would then be displayed on a SoftLayer server as a live updating graph.

The first day consisted mostly of Googling variations on “Edison getting started,” “read Grove heat sensor,” “write to LCD,” etc. We started off simply, by trying to make an LED blink, which was pretty easy. Making the LED STOP blinking, however, was a bit more challenging. But we eventually figured out how to stop a program from running. We had a lot of trouble getting our project to work in Python, so we eventually admitted defeat and switched to writing Node.js code, which was significantly easier (mostly because everything we needed was on Stack Overflow).

After we got the general idea of how these little boards worked, our project came together very quickly at the end of Day 2—and not a moment too soon. The second I shouted, “IT WORKS!” it was time for presentations—and for us to give out the lot of Raspberry Pis we brought to some lucky winners.

And, without further ado, we present to you … the winners!

BiffShocker

This team wanted to mod out the Hackster’s DeLorean time machine to prevent Biff (or anyone else) from taking it out for a spin. They used a variety of sensors to monitor the DeLorean for any unusual or unauthorized activity, and if all else failed, were prepared to administer a deadly voltage through the steering wheel (represented by harmless LEDs in the demo) to stop the interloper from stealing their time machine. The team has a wonderful write-up of the sensors they used, along with the products used to bring everything together.

This was a very energetic team, and we hope they'll use their new Raspberry Pis to keep the space-time continuum clear.

KegTime

The KegTime project aimed to make us all more responsible drinkers by using an RFID reader to measure alcohol consumption and call Uber for you when you have had enough. They used a SoftLayer server to host all the drinking data, and used it to interact with Uber’s API to call a ride at the appropriate moment. Their demo included a working (and filled) keg with a pretty fancy LED-laden tap, which was very impressive. In recognition of their efforts to make us all more responsible drinkers, we awarded them five Raspberry Pis so they can continue to build cool projects to make the world a better place.

The Future of Hackster.io
Although this is the end of the event in San Francisco, there are many more Hackster.io events coming up in the near future. I will be going to Phoenix next on March 28 and look forward to all the new projects inventors come up with.

Be happy and keep hacking!

-Chris

March 25, 2015

Introducing New Block Storage and File Storage

Everyone knows data growth is exploding. The chart below illustrates data growth—in zettabytes—over the last 11 years.

Storing all that data can get complicated. The rise of cloud computing and virtualization has led to myriad options for data storage. Kevin Trachier did a great job of defining and highlighting the differences in various cloud storage options in his blog post, Which storage solution is best for your project?

Today, I’m excited to announce that we’ve expanded SoftLayer’s cloud storage portfolio to include two new storage products: block storage and file storage, both featuring Performance and Endurance options. These storage offerings allow you to create storage volumes or shares and connect them to your bare metal or virtual servers using either NFS or iSCSI connectivity.

The Endurance and Performance classes of both block storage and file storage feature:

  • Storage sizes to fit any application—from 20GB to 12TB
  • Highly available connectivity—redundant networking connections reduce risk and mitigate unplanned events to provide business continuity
  • Allocated IOPS—meet any workload requirement through customizable levels of IOPS that are there when you need them
  • Durable and resilient—infrastructure provides peace of mind against data loss without the need to manage system-level RAID arrays
  • Concurrent Access—multiple hosts can simultaneously access both block and file volumes in support of advanced use cases such as clustered databases

The Endurance class of both block storage and file storage is available in three tiers, allowing you to choose the right balance of performance and cost for your needs:

  • 0.25 IOPS per GB is designed for workloads with low I/O intensity. Example applications include storing mailboxes or departmental level file shares.
  • 2 IOPS per GB is designed for most general purpose use. Example applications include hosting small databases backing Web applications or virtual machine disk images for a hypervisor.
  • 4 IOPS per GB is designed for higher intensity workloads. Example applications include transactional and other performance-sensitive databases.
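Because Endurance performance is allocated per gigabyte, the IOPS a volume gets is simply the tier multiplier times its size. A quick sketch (tier multipliers taken from the list above; the tier names are my own shorthand, not product names):

```python
# IOPS-per-GB multipliers for the three Endurance tiers described above.
ENDURANCE_TIERS = {"low": 0.25, "general": 2, "high": 4}

def provisioned_iops(size_gb, tier):
    """IOPS allocated to an Endurance volume of the given size and tier."""
    return size_gb * ENDURANCE_TIERS[tier]

# A 500GB general-purpose volume gets 1,000 IOPS
print(provisioned_iops(500, "general"))  # 1000
```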

All Endurance tiers support snapshots and replication to remote data centers.

We designed the Performance class of both block storage and file storage to support high I/O applications like relational databases that require consistent levels of performance. Block volumes and file shares can be provisioned with up to 6,000 IOPS and 96MB/s of throughput.

Available sizes and IOPS combinations:

Block storage and file storage are available in SoftLayer data centers worldwide. SoftLayer customers can log in to the customer portal and start using them today.

-Michael

March 23, 2015

Redefining the Startup Accelerator Business Model: An Interview with HIGHLINE’s Marcus Daniels

In this interview, SoftLayer’s community development lead in Canada, Qasim Virjee, sits down with Marcus Daniels, the co-founder and CEO of HIGHLINE, a venture-backed accelerator based in Vancouver and Toronto.

QV: Y Combinator has become an assumed standard for accelerators by creating its own business model. What do you think is both good and bad about this?

MD: Y Combinator (YC) not only created a new model for funding tech startups, but it also evolved the whole category. Historically, I like to think that Bill Gross's Idealab represented accelerator/incubator 1.0 and YC evolved that to 2.0 over the past decade, resulting in a hit parade of meaningful startups that are changing the world.

The good is that YC has created a “high quality” bar and led the standardization of micro-seed investment docs for the betterment of the whole startup ecosystem. It proved the model and has helped hundreds of amazing founders with venture profile businesses that are changing the world.

The bad is that there are now thousands of accelerators/incubators globally running generic programs that don't help founders much. More than half have a horrible track record of helping startups raise follow-on capital, and almost none has ever had a single exit from a startup it invested in.

HIGHLINE has a strong track record in our short history and now sees a big opportunity to be amongst the leaders in the evolution of the accelerator industry.

QV: Many accelerators focus on streamlining a program to process cohorts of companies at regular intervals throughout the year, every year. Often, the high throughput these programs expect means they must select companies from applications, rather than the approach you seem to be taking. Can you explain how HIGHLINE is sourcing companies for investment?

MD: HIGHLINE gets over 800 applications a year and targets about 20–30 investments during that time. Out of our last 12 investments, all had either come from referral partners or the team hunting the best founders to be part of our portfolio. Over the years, we have moved from the ideation stage, which comprises the majority of inbound applications, to the MVP-in-market stage, which is our sweet spot now. We also focus on low-volume, high-touch advisory support; spending time building relationships with founders and adding value to MVP-stage startups before investing helps us curate better deals.

QV: Traditionally, investment vehicles (such as VC firms and accelerator programs) have been run by financial industry types, but it seems that you are taking a more entrepreneurial approach with HIGHLINE and constantly evolving your business model. What can you tell me about this?

MD: The best accelerator leaders globally are past entrepreneurs who have some investment experience given how hands-on you have to be with the companies. Without the experience of starting and growing ventures, it is really hard to help tech founders navigate the daily challenges. Also, the best founders get to choose, and they want to work with other top founders in a long-term mentor/advisory/coaching relationship.

QV: How does being “VC-backed” differentiate HIGHLINE from other accelerators?

MD: Having several VCs as investors, such as the BDC and Relay Ventures, gives us an edge in several ways. Firstly, they are not only a great quality referral network for deals, but also a huge help in getting our companies venture-ready—even if they may not invest directly. Secondly, they allow us to focus internally on a specialization in helping venture-profile businesses raise follow-on capital, as opposed to the glut of programs that are optimized for entrepreneurial education and lifestyle job creation. Lastly, they put big pressure on the whole HIGHLINE team to both get results for shareholders and build something unique that can be a category leader over the next decade.

QV: Our country is physically large and this seems to have created differentiated tech startup scenes between its cities. How does HIGHLINE collapse the geographic divide by having a physical presence in both Vancouver and Toronto?

MD: HIGHLINE tries to curate and unite the best digital founders, institutional investors, and ecosystem partners across Canada. We position our offices in both Vancouver and Toronto as portfolio hubs for founders who want to be headquartered in Canada, but want to take on the world. Most importantly, we spend time in all major Canadian startup ecosystems and have plans for unique events to bring our curated community closer together.

- Qasim

March 20, 2015

Startups: Always Be Hiring

In late 2014, I was at a Denver job fair promoting an event I was organizing, NewCo Boulder. All the usual suspects of the Colorado tech community were there; companies ranging in size from 50 to 500 employees. It's a challenge to stand out from the crowd when vying for the best talent in this competitive job market, so the companies had pop-up banners, posters, swag of every kind on the table, and swarms of teams clad in company t-shirts to talk to everyone who walked by.

Nestled amid the dizzying display of logos was MediaNest, a three-person, pre-funding startup in the Catalyst program; at the time, they were part of the Boomtown Boulder fall 2014 cohort. What the heck was a scrappy startup doing among the top Colorado tech companies? In a word: hiring.

MediaNest was there to hire for three roles: front end developer, back end developer, and sales representative. They were there to double the size of their team ... when they had the money. In the war for talent, they started early and were doing it right.

I've often heard VCs (venture capitalists) and highly successful startup CEOs say the primary roles for a startup CEO are to always keep money in the bank and butts in seats. Both take tremendous time and energy, and they go hand-in-hand. It takes months to close a funding round, and similarly, it takes months to fill roles with the right people. If you're just getting started with hiring once that money is in the bank, you're starting from a deficit, burning capital, and straining resources while you get the recruiting gears going.

The number one resource for startup hiring is personal networks. Start with your friends and acquaintances and let everyone know you're looking to fill specific roles, even as you're out raising the capital to pay them. As the round gets closer to closing, intensify your efforts and expand your reach.

But what happens if you find someone perfect before you’re ready to hire them? Julien Khaleghy, CEO of MediaNest, says, "It's a tricky question. We will tend to be generous on the equity portion and conservative on the salary portion. If a comfortable salary is a requirement for the person, we will lock them for our next round of funding."

MediaNest wasn’t funded when I saw them in Denver, and they weren’t ready to make offers, so why attend a job fair? Khaleghy adds, based on his experience as CEO, "It's actually a good thing to show a letter of intent to hire someone when you are raising money."

At that job fair in Denver, MediaNest, with its simple table and two of the co-founders present, was just as busy as the companies with full complements of staff giving away every imaginable piece of swag. I recommend following their example and getting ahead of the hiring game.

As long as you're successful, you'll never stop hiring. So start today.

-Rich

March 18, 2015

SoftLayer, Bluemix and OpenStack: A Powerful Combination

Building and deploying applications on SoftLayer with Bluemix, IBM’s Platform as a Service (PaaS), just got a whole lot more powerful. At IBM’s Interconnect, we announced a beta service for deploying OpenStack-based virtual servers within Bluemix. Obviously, the new service is exciting because it brings together the scalable, secure, high-performance infrastructure from SoftLayer with the open, standards-based cloud management platform of OpenStack. But making the new service available via Bluemix presents a unique set of opportunities.

Now Bluemix developers can deploy OpenStack-based virtual servers on SoftLayer or their own private OpenStack cloud in a consistent, developer-friendly manner. Without changing your code, your configuration, or your deployment method, you can launch your application to a local OpenStack cloud on your premises, a private OpenStack cloud you have deployed on SoftLayer bare metal servers, or to SoftLayer virtual servers within Bluemix. For instance, you could instantly fire up a few OpenStack-based virtual servers on SoftLayer to test out your new application. After you have impressed your clients and fully tested everything, you could deploy that application to a local OpenStack cloud in your own data center, all from within Bluemix. With Bluemix providing the ability to deploy applications across cloud deployment models, developers can create an infrastructure configuration once and deploy consistently, regardless of the stage of their application development life cycle.

OpenStack-based virtual servers on SoftLayer enable you to manage all of your virtual servers through standard OpenStack APIs and user interfaces, and to leverage the tooling, knowledge, and processes you or your organization have already built out. So the choice is yours: you may fully manage your virtual servers directly from within the Bluemix user interface, or choose standard OpenStack interface options such as the Horizon management portal, the OpenStack API, or the OpenStack command line interface. For clients who are looking for enterprise-class infrastructure as a service but wish to avoid getting locked into a vendor’s proprietary interface, our new OpenStack standard access provides a new choice.
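That portability rests on the fact that every OpenStack cloud speaks the same Compute (Nova) API. As a rough sketch (the image and flavor IDs below are placeholders, not real SoftLayer values), the request body for booting a virtual server via the standard `POST /v2.1/servers` call looks the same no matter which OpenStack endpoint receives it:

```python
import json

# Placeholder IDs; in practice these come from the image catalog and
# flavor list of whichever OpenStack cloud you are targeting.
IMAGE_REF = "70a599e0-31e7-49b7-b260-868f441e862b"
FLAVOR_REF = "1"

def build_server_create_body(name, image_ref, flavor_ref):
    """Build the JSON body for a standard Nova POST /v2.1/servers call.

    The same body works against any OpenStack endpoint: a local cloud
    on your premises, a private cloud on bare metal, or a public one.
    """
    return json.dumps({
        "server": {
            "name": name,
            "imageRef": image_ref,    # image UUID from the image catalog
            "flavorRef": flavor_ref,  # flavor ID (instance size)
        }
    })

body = build_server_create_body("demo-app-01", IMAGE_REF, FLAVOR_REF)
print(body)
```

The same body could be sent by any HTTP client, by the OpenStack command line interface, or by Horizon acting on your behalf; only the endpoint URL and credentials change between clouds.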

Providing OpenStack-based virtual servers is just one more (albeit major) step toward our goal of providing even more OpenStack integration with SoftLayer services. For clients looking for enterprise-class Infrastructure as a Service (IaaS) available globally and accessible via standard OpenStack interfaces, OpenStack-based virtual servers on SoftLayer provide exactly that.

The beta is open now for you to test deploying and running servers on the new SoftLayer OpenStack public cloud service through Bluemix. You can sign up for a Bluemix 30-day free trial.

- @marcalanjones

March 12, 2015

Sydney’s a Go

Transforming an empty room into a fully operational data center in just three months: Some said it couldn’t be done, but we did it. In less than three months, actually.

Placing a small team on-site and turning an empty room into a data center is what SoftLayer refers to as a Go Live. Of course, there is more to bringing a data center online than just the transformation of an empty room. In the months leading up to the Go Live deployment, there are details to work out, contracts to sign, and the electrical fit out (EFO) of the room itself.

During my time with SoftLayer I have been involved in building several of our data centers, or SoftLayer pods as we call them. Pods are designed to facilitate infrastructure scalability, and although they have evolved over the years as newer, faster equipment has become available, the original principles behind the design are still intact—so much so that a data center technician could travel to any SoftLayer data center in the world and start working without missing a beat. The same holds true for building a pod from the ground up. This uniformity is what allows us to fast-track the build out of a new SoftLayer pod, and it is one of the reasons the Sydney data center launch was such a success.

Rewind Three Months

When we landed in Sydney on December 11, 2014, we had an empty server room and about 125 pallets of gear and equipment that had been carefully packed and shipped by our inventory and logistics team. First order of business: breaking down the pallets, inspecting the equipment for any signs of damage, and checking that we received everything needed for the build. It’s really quite impressive to know that everything from screwdrivers to our 25U routers to even earplugs had been logged and accounted for. When you are more than 8,500 miles away from your base of operations, it’s imperative that the Go Live team has everything it needs on hand from the start. Something as seemingly inconsequential as not having the proper screws can lead to costly delays during the build. Once everything’s been checked off, the real fun begins.


(From Left) Jackie Vong, Dennis Vollmer, Jon Bowden, Chris Stelly, Antonio Gomez, Harpal Singh, Kneeling - Zachary Schacht, Peter Panagopoulos, and Marcelo Alba

Next we set up the internal equipment that powers the pod: four rows of equipment that encompass everything from networking gear to storage to the servers that run various internal systems. Racking the internal equipment is done according to pre-planned layouts and involves far too many cage nuts, the bane of every server build technician’s existence.

Once the internal rows are completed, it’s time to start focusing on the customer rows that will contain bare metal and virtual servers. Each customer rack contains a minimum of five switches—two for the private network, two for the public network, and one out-of-band management switch. Each row has two power strips and, in the case of the Sydney data center, two electrical transfer switches at the bottom of the rack that provide true power redundancy by facilitating the transfer of power from one independent feed to another in the event of an outage. Network cables from the customer racks route back to the aggregate switch rack located at the center of each row.

Right around the time we start to wrap up the internal and customer rows, a team of network engineers arrives on-site to run the interconnects between the networking gear and the rest of the internal systems and to light up the fiber lines connecting our new pod to our internal network (as well as the rest of the world). This is a big day: not only do we finally get Wi-Fi up in the pod, but we are no longer isolated on an island. We are connected, and teams thousands of miles away can begin the process of remotely logging in to configure, deploy, and test systems. The networking team will start work on configuring the switches, load balancers, and firewalls for their specific purposes. The storage team will begin the process of bringing massive storage arrays online, and information systems will start work on deploying the systems that manage the automation each pod provides.


(From Left) Zach Robbins, Grayson Schmidt, Igor Gorbatok and Alex Abin

During this time, we start the process of onboarding the newest members of the team, the local Sydney techs, who in a few short months will be responsible for managing the data center independently. But before they fully take over, customer racks are prepped and waiting to house the final piece of the puzzle: the servers. They arrive on truck day [check out DAL05 Pod 2 truck day]; Sydney’s was around the beginning of February. Given the amount of hardware we typically receive, truck days are an event unto themselves: more than 1,500 of the newest and fastest SuperMicro servers of various shapes and sizes that will serve as the bare metal and virtual servers for our customers. Through a combination of manpower and automation, these servers get unboxed, racked, checked in, and tested before they are sold to our customers.

Now departments involved in bringing the Sydney data center online wrap up and sign off. Then we go live.

Bringing a SoftLayer pod online and on time is a beautifully choreographed process and one of my greatest professional accomplishments. The level of coordination and cohesion required to pull it off, not once, not twice, but ten times around the world in the last year alone, can’t be overstated.

-Dennis
