Tips And Tricks Posts

March 25, 2016

Be an Expert: Handle Drive Failures with Ease

Bare metal servers at SoftLayer employ best-in-class, industry-proven SAS, SATA, or SSD disks, which are extensively tested and qualified in-house by data center technicians. They are reliable, enterprise-grade hardware. However, the possibility of a single device failing cannot be ignored. HDD or device failures can happen for various reasons: power surges, mechanical or internal failure, drive firmware bugs, overheating, aging, and so on. Though every effort is made to mitigate these issues by selecting best-in-class hard drives and pre-testing devices before making them available to customers, one could still run into drive failures occasionally.

Is having RAID protection just good enough?

Drive failures on dedicated bare metal servers may cause data loss, downtime, and service interruptions if the servers are not deployed with an adequate risk mitigation plan. As a first line of defense, users choose to have RAID at various levels. This may seem sufficient, but it leaves the following problems:

  • The volume associated with the failed drive becomes degraded. This brings virtual drive (VD) performance below acceptable levels. A degraded volume is also likely to disable write-back caching, which degrades write performance further.
  • There is always a chance of another disk failing in the meantime. Unless a new disk is inserted and the rebuild is completed, a second disk failure could be catastrophic.

Today, a manual response to a disk failure may take quite some time between when the user notices that a disk has failed and when a technician is engaged to replace it at the server. During this time, while the system is in a degraded state, a second disk failure looms large over the user.

To mitigate this risk, SoftLayer recommends that users always have Global Hot Spare or Dedicated Hot Spare disks wherever available on their bare metal servers. Users can choose one or more hot spare disks per server. This typically requires earmarking a drive slot for hot spares, so it is recommended to take empty drive slots for global hot spare drives into consideration when ordering bare metal servers.

Adding a Hot Spare on an LSI MegaRAID Adapter

Users can use the WebBIOS utility or MegaRAID Storage Manager to add a hot spare drive.

It is easiest to configure using the MegaRAID Storage Manager software, available on the AVAGO website.

Once logged in, you’ll want to choose the Logical tab to view the unused disks under “Unconfigured Drives.” Right-clicking and selecting “Assign Global Hot Spare” ensures this drive stands by for any drive failure in any of the RAID volumes configured in the system. You can also choose to have a Dedicated Hot Spare for specific volumes that are critical. MegaRAID Storage Manager can also be used to access the server from a third-party machine or service laptop by providing the server IP address.

Figure 1 shows how to add a Global Hot Spare using MSM.

You can also use the WebBIOS interface to add hot spare drives. This is done by breaking into the card BIOS early in the boot process using Ctrl+R to access the BIOS Configuration Utility. As a prerequisite for accessing the KVM screen to see the boot-time messages, you’ll need to VPN into the SoftLayer network and use KVM under the “Actions” dropdown in the customer portal.

Once inside the WebBIOS screen, access the “PD Mgmt” tab and choose a free drive. Pressing F2 on the highlighted drive will display a menu for making the drive a Global Hot Spare. We recommend using the virtual keyboard while navigating and issuing commands in the KVM viewer.

Figure 2 provides more details for making a Hot Spare using the BIOS interface.
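
If you manage servers over SSH and prefer a command line route to the same result, LSI’s StorCLI and MegaCli utilities can assign a global hot spare as well. A minimal sketch, assuming controller 0 and an unconfigured drive in enclosure 252, slot 4 (substitute the IDs your controller reports):

# StorCLI: make the drive in enclosure 252, slot 4 a global hot spare
storcli /c0/e252/s4 add hotsparedrive

# Equivalent with the older MegaCli utility
MegaCli -PDHSP -Set -PhysDrv[252:4] -a0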

Adding a Hot Spare Through an Adaptec Adapter

Adaptec also provides the Adaptec Storage Manager and a BIOS option to add Global Hot Spares.

The Adaptec Storage Manager comes preinstalled on SoftLayer servers for supported operating systems. It can also be downloaded for the specific Adaptec card from this link. After launching the Adaptec Storage Manager, users can select an available free drive and create a global hot spare drive as shown in Figure 3.

Adaptec also provides a BIOS-based configuration utility that can be used to add a hot spare. To do this, you’ll need to break into the BIOS utility by pressing Ctrl+A early in the boot process. After that, select Global Hot Spares from the main menu to enter the drive selection page. Select a drive by pressing Insert, then press Enter to submit the changes.

Figure 4 depicts the selection of a Global Hot Spare using the BIOS configuration utility.
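
Adaptec controllers can likewise be scripted with the arcconf utility that accompanies Adaptec Storage Manager. A hedged sketch, assuming controller 1 and a free drive at channel 0, device 4 (your IDs will differ):

# Mark channel 0, device 4 on controller 1 as a hot spare
arcconf setstate 1 device 0 4 hsp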

Using hot spares reduces the risk from further drive failures and lowers the time the system remains in a degraded state. We recommend that SoftLayer customers leverage these benefits on their bare metal servers to be better armed against drive failures.

-Subramanian

March 24, 2016

future.ready(): 7 Things to Check Off Your Big Data Development List

Frank Ketelaars, Big Data Technical Leader for Europe at IBM, offers a checklist that every developer should have pinned to their board when starting a big data project.

Editor’s Note: Does your brain switch off when you hear industry-speak words like “innovation,” “transformation,” “leading edge,” “disruptive,” and “paradigm shift”? Go on, go ahead and admit it. Ours do, too. That’s why we’re launching the future.ready() series—consisting of blogs, podcasts, webinars, and Twitter chats—with content created by developers, for developers. Nothing fluffy, nothing buzzy. With the future.ready() series, we aim to equip you with tools and knowledge that you can use—not just talk and tweet about.

For the first edition, I’ve invited Frank Ketelaars, an expert in high volume data space, to walk us through seven things to check off when starting a big data development project.

-Michalina Kiera, SoftLayer EMEA senior marketing manager

 

This year, big data moves from a water cooler discussion to the to-do list. Gartner estimates that more than 75 percent of companies are investing or planning to invest in big data in the next two years.

I have worked on multiple high volume projects in industries that include banking, telecommunications, manufacturing, life sciences, and government, and in roles including architect, big data developer, and streaming analytics specialist. Based on my experience, here’s a checklist I put together that should give developers a good start. Did I miss anything? Join me on the Twitter chat or webinar to share your experience, ask questions, and discuss further. (See details below.)     

1. Team up with a person who has a budget and a problem you can solve.

For a successful big data project, you need to solve a business problem that’s keeping somebody awake at night. If there isn’t a business problem and a business owner—ideally one with a budget— your project won’t get implemented. Experimentation is important when learning any new technology. But before you invest a lot of time in your big data platform, find your sponsor. To do so, you’ll need to talk to everyone, including IT, business users, and management. Remember that the technical advantages of analytics at scale might not immediately translate into business value.

2. Get your systems ready to collect the data.

With additional data sources, such as devices, vehicles, and sensors connected to networks and generating data, the variety of information and transportation mechanisms has grown dramatically, posing new challenges for the collection and interpretation of data.

Big data often comes from sources outside the business. External data comes at you in a variety of formats (including XML, JSON, and binary) and via a variety of APIs. In 2016, you might think that everyone is on REST and JSON, but think again: SOAP still exists! The variety of the data is the primary technical driver behind big data investments, according to a survey of 402 business and IT professionals by management consultancy NewVantage Partners. From one day to the next, an API might change or a source might become unavailable.

Maybe one day we’ll see more standardization, but it won’t happen any time soon. For now, developers must plan to spend time checking for changes in APIs and data formats, be ready to respond quickly to avoid service interruptions, and expect the unexpected.

3. Make sure you have the right to use that data.

Governance is a business challenge, but it’s going to touch developers more than ever before—from the very start of the project. Much of the data they will be handling is unstructured, such as text records from a call center. That makes it hard to work out what’s confidential, what needs to be masked, and what can be shared freely with external developers. Data will need to be structured before it can be analyzed, but part of that process includes working out where the sensitive data is, and putting measures in place to ensure it is adequately protected throughout its lifecycle.

Developers need to work closely with the business to ensure that they can keep data safe, and provide end users with a guarantee that the right data is being analyzed and that its provenance can be trusted. Part of that process will be about finding somebody who will take ownership of the data and attest to its quality.

4. Pick the right tools and languages.

With no real standards in place yet, there are many different languages and tools used to collect, store, transport, and analyze big data. Languages include R, Python, Julia, Scala, and Go (plus the Java and C++ you might need to work with your existing systems). Technologies include Apache Pig, Hadoop, and Spark, which provides massively parallel processing on top of a file system without requiring Hadoop. There’s a list of 10 popular big data tools here, another 12 here, and a round-up of 45 big data tools here. 451 Research has created a map that classifies data platforms according to database type, implementation model, and technology. It’s a great resource, but its 18-color key shows how complex the landscape has become.

Not all of these tools and technologies will be right for you, but they hint at one way the developer’s core competency must change. Big data will require developers to be polyglots, conversant in perhaps five languages, who specialize in learning new tools and languages fast—not deep experts in one or two languages.

Nota bene: MapReduce and Pig are among the highest-paid technology skills in the US, and other big data skills are likely to become highly sought after as demand for them grows. Scala is a relatively new functional programming language for data preparation and analysis, and I predict it will be in high demand in the near future.

5. Forget “off-the-shelf.” Experiment and set up a big data solution that fits your needs. 

You can think of big data analytics tools like Hadoop as a car. You want to go to the showroom, pay, get in, and drive away. Instead, you’re given the wheels, doors, windows, chassis, engine, steering wheel, and a big bag of nuts and bolts. It’s your job to assemble it.

As InfoWorld notes, DevOps tools can help to create manageable Hadoop solutions. But you’re still faced with a lot of pieces to combine, diverse workloads, and scheduling challenges.

When experimenting with concepts and technologies to solve a certain business problem, also think about successful deployment in the organization. The project does not stop after the proof of concept.

6. Secure resources for changes and updates.

Apache Hadoop and Apache Spark are still evolving rapidly. It is inevitable that the behavior of components will change over time, and some may be deprecated shortly after initial release. Implementing new releases will be painful, and developers will need an overview of the big data infrastructure to ensure that as components change, their big data projects continue to perform as expected.

The developer team must plan time for updates and deprecated features, and a coordinated approach will be essential for keeping on top of the change.

7. Use infrastructure that’s ready for CPU and I/O intensive workloads.

My preferred definition of big data (and there are many – Forbes found 12) is this: "Big data is when you can no longer afford to bring the data to the processing, and you have to do the processing where the data is."

In traditional database and analytics applications, you get the data, load it onto your reporting server, process it, and post the results to the database.

With big data, you have terabytes of data, which might reside in different places—and which might not even be yours to move. Getting it to the processor is impractical. Big data technologies like Hadoop are based on the concept of data locality—doing the processing where the data resides.

You can run Hadoop in a virtualized environment. Virtual servers don’t have local data, though, so the time taken to transport data between the SAN or other storage device and the server hurts the application’s performance. Noisy neighbors, unpredictable server speeds, and contested network connections can have a significant impact on performance in a virtualized environment. As a result, it’s difficult to offer service level agreements (SLAs) to end users, which makes it hard for them to depend on your big data implementations.

The answer is to use bare metal servers on demand, which enable you to predict and guarantee the level of performance your application can achieve, so you can offer an SLA with confidence. Clusters can be set up quickly, accelerating your project. Because performance is predictable and consistent, it’s possible to offer SLAs to business owners that will encourage them to invest in the big data project and rely on it for making business decisions.

How can I learn more?

Join me in the Twitter chat and webinar (details below) to discuss how you’re addressing big data or have your questions answered by me and my guests.  

Add our Twitter chat to your calendar. It happens Thursday, March 31 at 1 p.m. CET. Use the hashtag #SLdevchat to share your views or post your questions to me.

Register for the webinar on Wednesday, Apr 20, from 5 to 6 p.m. CET.

 

About the author

Frank Ketelaars has been Big Data Technical Leader in Europe for IBM since August 2013. As an architect, big data developer, and streaming analytics specialist, he has worked on multiple high volume projects in banking, telecommunications, manufacturing, life sciences and government. He is a specialist in Hadoop and real-time analytical processing.


 

February 10, 2016

The Compliance Commons: Do you know our ISOs?

Editor’s note: This is the first of a three-part series designed to address general compliance topics and to answer frequently asked compliance questions.

How many times have you been asked by a customer if SoftLayer is ISO compliant?  Do you ever find yourself struggling for an immediate answer?  If so, you're not alone. 

ISO stands for International Organization for Standardization. The organization has published more than 19,000 international standards, covering almost all aspects of technology and business. If you have any questions about a specific ISO standard, you can search the ISO website. If you would like the full details of any ISO standard, an online copy of the standard can be purchased through their website. 

SoftLayer holds three ISO certifications, and we’re going after more. We offer industry-standard best security practices relating to cloud infrastructure, including:

ISO/IEC 27001: This certification covers the information security management process. It certifies that SoftLayer offers best security practices in the industry relating to cloud infrastructure as a service (IaaS). Going through this process and obtaining certification means that SoftLayer observes industry best practices in offering a safe and secure place to live in the cloud. It also means that our information security management practices adhere to strict, internationally recognized best practices.

ISO/IEC 27018: This certifies that SoftLayer follows the most stringent code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors. It establishes commonly accepted control objectives, controls, and guidelines for implementing measures to protect PII in accordance with the privacy principles in ISO/IEC 29100 for the public cloud computing environment. While not all of SoftLayer is public, and while we have very distinct definitions for processing PII for customers, we decided to obtain the certification to demonstrate that our security and privacy principles are robust.

ISO/IEC 27017: This is a code of practice for information security controls for cloud services.  It’s the global standard for cloud security practices—not only for what SoftLayer should do, but also for what our customers should do to protect information. SoftLayer’s ISO 27017 certification demonstrates our continued commitment to upholding the highest, most secure information security controls and applying them effectively and efficiently to our cloud infrastructure environment. The standard provides guidance in, but not limited to, the following areas:

  • Information Security
  • Human Resources
  • Asset Management
  • Access Control
  • Cryptography
  • Physical and Environmental Security
  • Operations Security
  • Communications Security
  • System Acquisition, Development & Maintenance
  • Supplier Relations
  • Incident Management
  • Business Continuity Management
  • Compliance
  • Network Security

How can SoftLayer’s ISO certification benefit me as a customer?

Customers can leverage SoftLayer’s certifications as long as they do so in the proper manner. Customers cannot claim that they’re ISO certified just because they’re using SoftLayer infrastructure; that’s not how it works. SoftLayer’s ISO certifications may, however, make it easier for customers to become certified, because they can leverage our certification for the SoftLayer boundary. Our SOC2 report (available through our customer portal or sales team) describes our boundary in greater detail: customers are not responsible for certifying what’s inside SoftLayer’s boundary.

How does SoftLayer prove its ISO compliance?

SoftLayer’s ISO Certificates of Registration are publicly available on our website and on our third-party assessor’s website. By design, our ISO certificates denote that we conform to and meet all the applicable objectives of each standard. Because the ISO standards apply the same steadfast controls to everyone, we don’t share the reports from our audits, but we can provide our certificates.

What SoftLayer data centers are applicable to the ISO certifications?

All of them! Each ISO certificate is applicable to every one of our data centers, in the U.S. and internationally. SoftLayer obtained ISO certifications on every one of our facilities because we operate with consistency across the globe. When a new SoftLayer data center comes online, there is some lag time between opening and certification because we need to be reviewed by our third-party assessor and have operational evidence available to support our data center certification. But as soon as we obtain the certifications, we’ll make them available.

Visit www.softlayer.com/compliance for a full list of our certifications and reports. They can also be found through the customer portal.

-Dana

 

February 5, 2016

Enable SSD Caching on Bare Metal Servers for 10X IOPS Improvements

Have you ever wondered how you could leverage the benefits of an SSD at the cost of cheap SATA hard drives?

SSDs provide extremely high IOPS for reads and writes, making them tempting for building IOPS-centric volumes. However, because SSD prices are significantly higher than those of SATA drives, IT managers are at a crossroads: burn a fortune on SSDs, or stay with SATA drives.

But there is a way to use SATA drives and experience SSD performance using some intelligent caching techniques. If you have the right PCI RAID card installed on your bare metal server, you can leverage its SSD caching features.

When configuring a bare metal server, make sure it has sufficient drive bays (at least 8) and an LSI (AVAGO) MegaRAID card as the chosen RAID card. You can select the appropriate RAID configuration for the OS and other workload data during the order process itself, so the server arrives with the RAID volumes preconfigured. As an additional resource, consider ordering two or more SSDs to act as high-speed cache devices; you can also add them to your server after deployment. These SSD caching drives can be used to improve the overall performance of the cheap SATA drives from which the volumes are carved.

Install MSM for Easy Management of the RAID Card

Once the server is deployed, consider installing the AVAGO MegaRAID Storage Manager (MSM) for the OS that has been installed on the server. (You can also manage the RAID controller remotely from a local machine by providing the IP address of the server where the controller is installed.)

Users can directly download MegaRAID Storage Manager from the AVAGO website for the card installed in the machine. For the most popular card, the MegaRAID SAS 9361-8i, download MSM from the AVAGO website here.

How to Create CacheCade SSD Caching Volumes and Attach Them to the Volume Drives

Follow these three steps to improve the IOPS of existing volumes on the bare metal server.

Step 1: Creating CacheCade Volumes

Once SSDs are deployed on the bare metal server and regular volumes are created, users can create a CacheCade volume to perform SSD caching. This is easily achieved by right-clicking the AVAGO controller and selecting the Create CacheCade – SSD Caching option.

Step 2: Choosing the right RAID Level and Write Policy for CacheCade Volumes

It is recommended to use a RAID 1 CacheCade volume, which eliminates a single point of failure at the SSD device level. This can be done by selecting the available SSDs on the system and choosing RAID 1 as the RAID level. Click Add to add all available disks, then Create Drive Group. Also, be sure to select Write Back as the Write Policy for increased IO performance for both reads from and writes to the volume being cached.

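If you are working over SSH instead of MSM, newer LSI cards expose the same operation through StorCLI. A rough sketch, assuming controller 0 and two SSDs sitting in enclosure 252, slots 6 and 7 (substitute your own IDs):

# Create a RAID 1 CacheCade volume with write-back policy from two SSDs
storcli /c0 add vd cachecade type=raid1 drives=252:6-7 WB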

Step 3: Enabling SSD Caching For Volumes

If the virtual drives were created without SSD caching enabled, then this is the right time to enable it as shown below—selectively enable or disable SSD caching for the set of virtual drives that needs it.

Right-click the volume and select Enable SSD Caching.

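The same toggle is available from StorCLI; a sketch assuming the volume to accelerate is virtual drive 0 on controller 0:

# Enable CacheCade SSD caching for virtual drive 0
storcli /c0/v0 set ssdcaching=on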

Performance Comparison

We tried a simple comparison on a 3.6TB RAID 50 volume (two spans of three drives) with and without SSD caching, using the IOmeter tool (available here). The workload was a 50/50 read/write, 4KB, purely random IO workload run against the volumes for about an hour.

Without SSD Caching – IOPS 970

With SSD Caching – IOPS 9000 (10X Improvement)

The result shows a 10X improvement in IOPS, though the benefit is workload dependent: the gain reflects how often reads and writes repeat against the same LBAs.

This could certainly help database applications or other IO-centric workloads that are hungry for IOPS get an instant boost in performance. Try this today at SoftLayer, and see the difference!

-Subramanian 

 

February 3, 2016

Use TShark to see what traffic is passing through your gateway

Many of SoftLayer’s solutions make excellent use of the Brocade vRouter (Vyatta) dedicated security appliance. It’s a true network gateway, router, and firewall for your servers in a SoftLayer data center. It’s also an invaluable troubleshooting tool should you have a connectivity issue or just want to take a gander at your network traffic. Built into vRouter’s command line and available to you is a full-fledged terminal-based Wireshark implementation—TShark.

TShark is fully implemented in vRouter. If you’re already familiar with TShark, you know you can call it from the terminal in either configuration or operational mode. You accomplish this by prefacing the command with sudo, making the full command sudo tshark [flags].

For those of us less versed in the intricacies of Wireshark and its command line cousin, here are a couple of useful examples to help you out.

One common flag I use in nearly every capture is -i (as a side note for those coming from a Microsoft Windows background: the flags are case sensitive). -i specifies the interface on which to capture traffic, and immediately helps cut down on information unrelated to the problem at hand. If you don’t set this flag, the capture defaults to the first non-loopback interface; in the case of vRouter on SoftLayer, that’s bond0. Additionally, if you want to trace a packet and its reply, you can set -i any to watch or capture traffic across all the interfaces on the device.

The second flag that I nearly always use is -f, which defines a capture filter to match traffic against. Only traffic that matches this filter will be captured. The filter uses the standard Wireshark capture filter syntax. Again, if you’re familiar with Wireshark, you can go nuts; but here are a few of the common filters I frequently use to help you get started:

  • host 8.8.8.8 will match any traffic to or from the specified host; in this case, the venerable Google DNS servers.
  • net 8.8.8.0/24 works just like host, but for the entire specified network, in case you don’t know the exact host address you are looking for.
  • dst and src are useful if you want to drill down to a specific flow or want to look at just the incoming or outgoing traffic. These filters are usually paired with a host or net to match against.
  • port lets you specify a port on which to capture traffic, like host and net. Used by itself, port will match both source and destination ports. In the case of well-known services, you can also refer to the port by its common name, e.g., dns.

One final cool trick with the -f filter is the and operator and the negation not. They let you combine search terms and specifically exclude traffic in order to create a very finely tuned capture for your needs.

If you want to capture to a file to share with a team or to plug into more advanced analysis tools on another system, the -w flag is your friend. Without -w, tshark behaves like tcpdump and the output appears in your terminal session. If you want to load the file into Wireshark or another packet analyzer tool, make sure to add the -F flag to specify the file format. Here is an example:

Vyatta# sudo tshark -i bond0 -w testcap.pcap -F pcap -f 'src 10.128.3.5 and not port 80'

The command will capture on bond0 and output the capture to a .pcap file called testcap.pcap in the root directory of the file system. It will match only traffic from 10.128.3.5 that does not have source or destination port 80. While that is a bit of a mouthful to explain, it captures a very well-defined stream!

Here is one more example:

Vyatta# sudo tshark -i any -f 'host 10.145.23.4 and not port ssh'

This command will capture traffic to the terminal that is to or from the specified IP (10.145.23.4) that is not SSH. I frequently use this filter, or one a lot like it, when I am SSHed into a host and want to get a more general idea of what it is doing on the network. I don’t care about ssh because I know the cause of that traffic (me!), but I want to know anything else that’s going to or from the host.

This is all very much the tip of the iceberg; you can find a lot more information on the TShark man page. Hopefully these tips help out next time you want to see just what traffic is passing through your gateway.

- Jeff 

 

January 27, 2016

Sales Primer for Non-Sales Startup Founders

The founder of one of the startups in our Global Entrepreneur Program reached out to me this week. He is ready to start selling his company’s product, but he's never done sales before.

Often, startups consist of a hacker and a hustler—where the tech person is the hacker and the non-tech person is the hustler. In the aforementioned company, there are three hackers. Despite the founder being deeply technical, he is the closest thing they have to a hustler. I'm sure he'll do fine getting in front of customers, but the fact remains that he's never done sales.

So where do you begin as a startup founder if you've never sold before?

Free vs. Paid
His business is B2B, focusing on car dealers. He's worried about facing a few problems, including working with business owners who don’t normally work with startups. He wants to give the product away for free to a few customers to get some momentum, but is worried that after giving it away, he won’t be able to convert them to paying customers.

Getting that first customer is incredibly important, but there needs to be a value exchange. Giving products away for free presents two challenges:

  1. By giving something away, you devalue your product in the eyes of the customer.
  2. The customer has no skin in the game—no incentive to use it or try to make it work.

Occasionally, founders have a very close relationship with a potential customer (e.g., a former manager or a trusted ex-colleague) where they can be assured the product will get used. In those cases, it might be appropriate to give it away, but only for a defined time.

The goal is sales. Paying customers reduce burn and show traction.

Price your product, go to market, and start conversations. Be willing to negotiate to get that first sale. If you do feel strongly about giving it away for free, put milestones and limitations in place for how and when that customer will convert to paid. For example, agree to a three-month free trial that becomes a paid fee in the fourth month. Or tie specific milestones to the payment, such as delivering new product features or achieving objectives for the client.

Build Credibility
When putting a new product in the market, especially one in an industry not enamored with startups and where phrases like “beta access” will net you funny looks, it helps to build credibility. This can be done incrementally. If you don't have customers, start with the conversations you’re having: “We’re currently in conversations with over a dozen companies.”

If you get asked about customers, don’t lie. Don’t even fudge it. I recommend being honest, and framing it by saying, “We’re deciding who we want to work with first. We want to find the right customer who is willing to work closely with us at the early stage. It’s the opportunity to have a deep impact on the future of the product. We're building this for you, after all.”

When you have interest and are in negotiations, you can then mention to other prospective customers that you’re in negotiations with several companies. Be respectful of the companies you’re in negotiations with; I wouldn't recommend mentioning names unless you have explicit permission to do so.

As you gain customers, get their permission to put them on your website. Get quotes from them about the product, and put those on your site and marketing materials. You can even put these in your sales contracts.

Following this method, you can build credibility in the market, show outside interest in your product, and maintain an ethical standing.

Get to No
A common phrase when I was first learning to sell was, “get to the ‘no’.” It has a double meaning: expect that someone is going to say “no” so be ready for it, and keep asking until you get a “no.” For example, if “Are you interested in my product?” gets you a “yes,” then ask, “Would you like to sign up today?”

When you get to no, the next step is to uncover why they said no. At this point, you’re not selling; you’re just trying to understand why the person you’re talking to is saying no. It could be they don't have the decision-making authority, they don't have the budget, they need to see more, or the product is missing something important. The point is, you don’t know, and your goal here is to get to the next step in their process. And you don’t know what that is unless you ask.

Interested in learning more? Dharmesh Shah, co-founder and CTO of Hubspot and creator of the community OnStartups, authored a post with 10 Ideas For Those Critical Early Startup Sales that is well worth reading.

As a founder, you’re the most passionate person about your business and therefore the most qualified to get out and sell. You don't have to be “salesy” to sell; you just need to get out and start conversations.

-Rich

January 22, 2016

Using Cyberduck to Access SoftLayer Object Storage

SoftLayer object storage provides a low cost option to store files in the cloud. There are three primary methods for managing files in SoftLayer object storage: via a web browser, using the object storage API, or using a third-party application. Here, we’ll focus on the third-party application method, demonstrating how to configure Cyberduck to perform file uploads and downloads. Cyberduck is a free and open source (GPL) software package that can be used to connect to FTP, SFTP, WebDAV, S3, or any OpenStack Swift-based object storage such as SoftLayer object storage.

Download and Install Cyberduck

You can download Cyberduck here, with clients available for both Windows and Mac. After the installation is complete, download the profile for SoftLayer object storage here. Choose any of the download links under the Connecting section; the preconfigured location won’t matter, as the settings will be modified later.

Once the profile has been downloaded, it needs to be modified to allow the hostname to be changed. Open the downloaded file (e.g., Softlayer (Amsterdam).cyberduckprofile) in a text editor. Locate the Hostname Configurable key (<key>Hostname Configurable</key>) and change the XML tag that follows it from <false/> to <true/>. Once this change has been made, there are two ways to load the configuration file: move the file to the profiles directory where Cyberduck is installed (on Windows this is C:\Program Files (x86)\Cyberduck\profiles by default), or double-click the profile and Cyberduck will add it.
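
If you’d rather script that one-line change than hunt through the XML by hand, a quick perl one-liner can flip the flag. This is just a sketch; the profile filename is the Amsterdam example from above:

# Flip the value following the Hostname Configurable key from false to true
perl -0777 -pi -e 's|(<key>Hostname Configurable</key>\s*)<false/>|${1}<true/>|' 'Softlayer (Amsterdam).cyberduckprofile'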

Configure Cyberduck to Work with SoftLayer

Now that Cyberduck has been installed, it needs to be configured to connect to object storage in SoftLayer. You can do this by creating a bookmark in Cyberduck. With Cyberduck open, click on Bookmark in the main menu bar, then New Bookmark in the dropdown menu.

In the dropdown box at the top of the Bookmark window, select SoftLayer Object Storage (Name of Location). Depending on the profile that was downloaded, the location may be different. When the SoftLayer profile has been selected, the configurable options for that profile will be displayed. Enter a nickname that will identify the object storage location.

Next, depending on which data center will store the objects, the server option in Cyberduck may need to be changed. To find out which server should be specified, open a web browser and log in to the SoftLayer portal. Once in the portal, click Storage, then Object Storage. Select the object storage account that will be used for this connection.

If no accounts exist, a new object storage account can be ordered by using the Order Object Storage link located in the upper right-hand corner. After selecting the account, select the data center where the object storage will reside.

When the Object Storage page loads, there will be a View Credentials link under the object storage container dropdown box in the upper left section of the screen.

Clicking on that link will bring up a dialog box that contains the information necessary for creating a connection in Cyberduck. Because SoftLayer has both public and private networks, there are two authentication endpoints available. The setup for each endpoint is the same, but a VPN connection to the SoftLayer private network is necessary in order to use the private endpoint.

Here, we will be using the public endpoints. Select the server address for the public endpoint and enter it into the Server text box in Cyberduck.

Next, select the username. It will be in the format:

object_storage_account_name:softlayer_user_name.

Then enter it into the Username text box. (Make note of the API key; it will be used later.)

Once those options have been set (Nickname, Server, and Username), close the new bookmark window. In the main Cyberduck window, you should see the newly created bookmark listed. Double-click on it to connect to the SoftLayer object storage.

At this point, Cyberduck will prompt for the API key. Use the API key noted above and Cyberduck will connect to SoftLayer object storage. Uploading files can be accomplished by selecting the files and dragging them to the Cyberduck window. Downloading can be accomplished by selecting a file in Cyberduck and dragging it to the local folder where it will be downloaded.
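
If you want to verify the credentials outside of Cyberduck, the same endpoint speaks the OpenStack Swift v1 authentication protocol, so you can test with curl. A sketch with placeholder values (the account name, username, API key, and Dallas endpoint below are examples only):

# Authenticate; the response headers include X-Auth-Token and X-Storage-Url
curl -i -H "X-Auth-User: SLOS12345-2:SL67890" \
     -H "X-Auth-Key: 0123456789abcdef" \
     https://dal05.objectstorage.softlayer.net/auth/v1.0

# List your containers using the token and storage URL returned above
curl -H "X-Auth-Token: <token>" <storage-url>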

-Bryan Bush

January 8, 2016

A guide to Direct Link connectivity

So you’ve got your infrastructure running on SoftLayer, but you find yourself wishing for a more direct way to connect your on-premises or co-located infrastructure to your SoftLayer cloud infrastructure—with higher bandwidth and lower latency. And you think the Internet just isn’t good enough when it comes to VPN tunnels and private network connectivity. Does that sound like you?

What are my options?

SoftLayer offers three Direct Link products specifically for customers looking for the most efficient connection to their SoftLayer private network. A Direct Link enables you to connect to the SoftLayer private network backbone with low latency at speeds up to 10Gbps, using fiber cross-connect patches directly into the SoftLayer private network. A Direct Link connects you to the SoftLayer private network within the same geographical location as the physical cross-connect. (An add-on is available that enables you to connect to any of your SoftLayer private networks on a global scale.)

Direct Link Network Service Provider

The Direct Link NSP option allows you to create a cross-connect using single-mode fiber from one of our PoP locations onto the SoftLayer private backbone. A Network Service Provider of your preference provides connectivity from your on-prem location to the SoftLayer PoP. This could be an “in-facility” cross-connect to your own equipment, or an MPLS, Metro WAN, or fiber provider. The Direct Link NSP is the top-tier option we offer for private network connectivity onto the SoftLayer private backbone.

Direct Link Cloud Exchange Provider

A cloud exchange provider is a carrier/network provider that is already connected to SoftLayer using multi-tenant, high-capacity links. This allows you to purchase a virtual circuit from that provider and a Direct Link cloud exchange link from SoftLayer at reduced cost, because the physical connectivity from SoftLayer to the cloud exchange provider is already in place and shared among customers.

Direct Link Colocation Provider

If your gear is co-located in a cabinet purchased via SoftLayer in a facility near or adjacent to a SoftLayer data center or POD, this option will work for you. Similar to the NSP option, it uses single-mode fiber, but there’s no need to connect to a SoftLayer PoP location first—you can connect directly from your cabinet to the relevant SoftLayer data center.

How do you communicate over a Direct Link?

The SoftLayer Direct Link service is a routed Layer 3 service. The routing options are: routing using a SoftLayer-assigned subnet, NAT, GRE or IPsec tunnels, VRF, and BGP.

Routing
We directly bind the 172.x.x.x IP block to your remote hosts that need to communicate with your SoftLayer infrastructure. You can either renumber your existing hosts on the remote networks or bind these as secondary IPs and set up appropriate static routes on the hosts. You can then use the 172.x.x.x IP space to communicate with the 10.x.x.x IPs of your SoftLayer hosts as necessary. Routing via BGP is optional.

NAT
With NAT, SoftLayer will assign you a block of IPs from the 172.16.0.0/12 IP block to NAT onto a device from your remote network, preventing IP conflicts with the SoftLayer 10.x.x.x IP range(s) assigned.
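
What this looks like on your side depends on your equipment. On a Linux router, for example, source NAT onto the assigned range is a single iptables rule; a hedged sketch, with placeholder addresses and interface name:

# SNAT traffic bound for the SoftLayer 10.0.0.0/8 network onto an address
# from the 172.16.x.x block assigned by SoftLayer for the Direct Link
iptables -t nat -A POSTROUTING -d 10.0.0.0/8 -o eth1 -j SNAT --to-source 172.16.4.5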

GRE / IPsec Tunneling
You can create a GRE or IPsec tunnel between the remote network and your infrastructure here at SoftLayer. This allows you to use whatever IP space you want on the SoftLayer side and route back across the tunnel to the remote network. That said, this is a configuration that has to be managed and supported by you, independent of SoftLayer. Furthermore, this configuration could break connectivity to the SoftLayer services network if you use a 10.x.x.x block that SoftLayer has in use for services. This solution also requires that each host needing connectivity to both the SoftLayer services network and the remote network have two IPs assigned (one from the SoftLayer 10.x.x.x block and one from the remote network block) and static routes set up on the host to ensure traffic is routed appropriately. You will not be able to assign whatever IP space you want directly on the SoftLayer hosts (BYOIP) and have it be routable on the SoftLayer network inherently. The only way to do this is as outlined above, and it is not supported by SoftLayer.
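
To illustrate the do-it-yourself nature of this option, here is roughly how a GRE tunnel is brought up on a Linux host with iproute2. Every address below is a placeholder; choose tunnel endpoints and inner ranges that fit your own addressing plan:

# Create and enable a GRE tunnel to the remote network's endpoint
ip tunnel add gre1 mode gre local 10.3.2.10 remote 172.16.4.5 ttl 255
ip addr add 192.168.255.1/30 dev gre1
ip link set gre1 up

# Route the remote network's range across the tunnel
ip route add 192.168.10.0/24 dev gre1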

VRF
You can opt in to using a VRF (Virtual Routing and Forwarding) instance. This allows you to either use your own remote IP addresses or overlap with a large majority of the SoftLayer infrastructure; however, be aware that if you use the 10.x.x.x network, you still cannot overlap with your hosts within SoftLayer or with the SoftLayer services network (10.0.0.0/14 and 10.200.0.0/14). You will not be able to use any of the following for your remote prefixes: 10.0.0.0/14, 10.200.0.0/14, 10.198.0.0/15, 169.254.0.0/16, 224.0.0.0/4, and any IP ranges assigned to your VLANs on the SoftLayer platform. When choosing the VRF option, the ability to use the SoftLayer VPN services for management of your servers is no longer available. Routing via BGP is optional.

FAQ

Will I need to provide my own cross-connect?
Yes, you will need to order your own cross-connect at your data center of choice—to be connected to the SoftLayer switch port described in the LOA (Letter of Authorization) provided.

What kind of cross-connects are supported?
We strictly use Single Mode Fiber (SMF). We do not accept MMF or Copper.

What is the default size of the remote 172.16.*.* subnet assigned?
Unless otherwise requested, Direct Link customers will be assigned a /24 (256 IPs) subnet.

Which IP block has been reserved for SoftLayer servers on the backend?
We've allocated the entire 10.0.0.0/8 block for use on the SoftLayer private network. Specifically, 10.0.0.0/14 has been earmarked for services. Here’s the full list of service subnets: http://knowledgelayer.softlayer.com/faqs/196#154

Which IP block has been reserved for point-to-point SoftLayer XCR to customer router?
The 10.254.0.0/16 range. We normally allocate either a /30 or /31 subnet for the point-to-point connection (between our XCR and your equipment on the other end of the Direct Link).

Does Direct Link support jumbo frames?
Yes. Just like the private SoftLayer network, Direct Link supports jumbo frames up to an MTU (Maximum Transmission Unit) of 9000.

Pricing and locations

A list of available locations and pricing can be found at www.softlayer.com/direct-link.

-Mathijs Dubbe

December 30, 2015

Using Ansible on SoftLayer to Streamline Deployments

Many companies today are leveraging new tools to automate deployments and handle configuration management. Ansible is a great tool that offers flexibility when creating and managing your environments.

SoftLayer has components built into the Ansible codebase, which means continued support for new features as the Ansible project expands. You can conveniently pull your SoftLayer inventory and work with your chosen virtual servers using the core Ansible library along with the SoftLayer inventory module. Within your inventory list, your virtual servers are grouped by various traits, such as “all virtual servers with 32GB of RAM” or “all virtual servers with a domain name of softlayer.com.” The inventory list provides different categorized groups that can be expanded upon. With the latest updates to the SoftLayer inventory module, you can now get a list of virtual servers by tags, as well as work with private virtual servers. You can then use each of the categories provided by the inventory list within your playbooks.
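
As a quick illustration, the SoftLayer dynamic inventory script (softlayer.py, found in the Ansible repository’s inventory contrib area) can be run by hand to inspect those groups, or handed to ansible-playbook with -i. A sketch, assuming your SoftLayer API credentials are already configured for the script and site.yml is a playbook of your own:

# Dump the generated inventory, including the tag and domain groups
./softlayer.py --list

# Run a playbook against the dynamic inventory
ansible-playbook -i softlayer.py site.yml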

So, how can you work with the new categories (such as tags) if you don’t yet have any inventory or a deployed infrastructure within SoftLayer? You can use the new SoftLayer module that’s been added to the Ansible Extras Project. This module provides the ability to provision virtual servers within a playbook. All you have to do is supply the build detail information for your virtual server(s) within your playbook and go.

Let’s look at an example playbook. You’ll want to specify a hostname along with a domain name when defining the parameters for your virtual server(s). The hostname can have an incremental number appended to the end of it if you’re provisioning more than one virtual server; e.g., Hostname-1, Hostname-2, and so on. You just need to set the increment parameter to True. Incremental naming offers the ability to uniquely name virtual servers within your playbook, but it is optional in cases where you want similar hostnames. Notice that you can also specify tags for your virtual servers, which is handy when working with your inventory in future playbooks.

Following is a sample playbook for building Ubuntu virtual servers on SoftLayer:

---
- name: Build Tomcat Servers
  hosts: localhost
  gather_facts: False
  tasks:
  - name: Build Servers
    local_action:
      module: softlayer
      quantity: 2
      increment: True
      hostname: www
      domain: test.com
      datacenter: mex01
      tag: tomcat-test
      hourly: True
      private: False
      dedicated: False
      local_disk: True
      cpus: 1
      memory: 1024
      disks: [25]
      os_code: UBUNTU_LATEST
      ssh_keys: [12345]

By default, your playbook will pause until each of your virtual servers completes provisioning before moving on to the next plays within your playbook. You can set the wait parameter to False if you choose not to wait for the virtual servers to complete provisioning. This is helpful when you want to build many virtual servers but some have different characteristics, such as RAM or tag naming. You can also set the maximum time you want to wait on the virtual servers with the wait_timeout parameter, which takes an integer defining the number of seconds to wait.

Once you’re finished using your virtual servers, canceling them is as easy as creating them. Just specify a new playbook step with a state of absent, as well as specifying the virtual server ID or tags to know which virtual servers to cancel.

The following example will cancel all virtual servers on the account with a tag of tomcat-test:

- name: Cancel Servers
  hosts: localhost
  gather_facts: False
  tasks:
  - name: Cancel by tag
    local_action:
      module: softlayer
      state: absent
      tag: tomcat-test

New features are being developed in the core inventory library to bring additional functionality to Ansible on SoftLayer. These developments can be found by following the core Ansible project hosted on GitHub. You can also follow the Ansible Extras project for updates to the SoftLayer module.

As of this blog post, the new SoftLayer module is still pending inclusion into the Ansible Extras Project. Click here to check out the current pull request for the latest code and samples.

-Matt

November 4, 2015

Shared, scalable, and resilient storage without SAN

Storage area networks (SAN) are used most often in the enterprise world. In many enterprises, you will see racks filled with these large storage arrays. They are mainly used to provide a centralized storage platform with limited scalability. They require special training to operate; are expensive to purchase, support, and expand; and if one of those devices fails, there is big trouble.

Some people might say SAN devices are a necessary evil. But are they really necessary? Aren’t there alternatives?

Most startups nowadays run their services on commodity hardware, with smart software to distribute their content across server farms globally. Current, well-established, and successful companies that run websites or apps like WhatsApp, Facebook, or LinkedIn continue to operate pretty much the same way they started. They need the ability to scale and perform at unpredictable rates all around the world, so they use commodity hardware combined with smart software. These types of companies need the features that SAN storage offers—but with more scalability and global resiliency, without centralization, and without having to buy expensive hardware. But how do they provide servers access to the same data, and how do they avoid data loss?

The answer is actually quite simple, although its technology is quite sophisticated: distributed storage.

In a world where virtualization has become the standard for most companies, and where even applications and networking are being virtualized, virtualization giant VMware answers this question with Virtual SAN. It effectively eliminates the need for SAN hardware in a VMware environment (and it will also be available for purchase from SoftLayer before the end of the year). Other distributed products in this vein are GlusterFS (also offered in our QuantaStor solution), Ceph, Microsoft Windows DFS, Hadoop HDFS, document-oriented databases like MongoDB, and many more.

These solutions, however, vary in maturity. Object storage is a great example of a new type of storage that has come to market that doesn’t require SAN devices. With SoftLayer, you can run them all.

When you have bare metal servers set up as hypervisors or application servers, it’s likely you have a lot of drive bays within those servers, mostly unused. Stuffing them with hard drives and allowing the software to distribute your data across multiple servers in multiple locations, with two or three replicas, results in a big, safe, fast, distributed storage platform. Scaling such a platform is just a matter of adding more bare metal servers with even more hard drives and letting the software handle the rest.
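
To make that concrete, here is roughly what it looks like with GlusterFS, one of the distributed options mentioned above. The server names and brick paths are placeholders; a real deployment would put each brick on its own drive or RAID set:

# Create a volume with three replicas spread across three servers
gluster volume create myvol replica 3 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
gluster volume start myvol

# Scaling out later is just a matter of adding more bricks
gluster volume add-brick myvol replica 3 server4:/data/brick1 server5:/data/brick1 server6:/data/brick1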

Nowadays we are seeing more and more hardware solutions like SAN—or even networking—being replaced with smarter software on simpler and more affordable hardware. At SoftLayer, we offer month-to-month and hourly bare metal servers with up to 36 drive bays, providing a lot of room for storage. With 10Gbps global connectivity options, we offer fast, low-latency networking for syncing between servers and delivering data to customers.

-Mathijs

Subscribe to tips-and-tricks