August 11, 2014

I PLEB Allegiance to My Data!

As a "techy turned marketing turned social media turned compliance turned security turned management" guy, I have had the pleasure of talking to many different customers over the years and have heard horror stories about data loss, data destruction, and data availability. I have also heard great stories about how to protect data and the differing ways to approach data protection.

On a daily basis, I deal with NIST 800-53 rev.4, PCI, HIPAA, CSA, FFIEC, and SOC controls among many others. I also deal with specific customer security worksheets that ask for information about how we (SoftLayer) protect their data in the cloud.

My first response is always, WE DON’T!

The looks I’ve seen on faces in reaction to that response over the years have been priceless. Not just from customers but from auditors’ faces as well.

  • They ask how we back up customer data. We don’t.
  • They ask how we make it redundant. We don’t.
  • They ask how we make it available 99.99 percent of the time. We don’t.

I have to explain to them that SoftLayer is simply infrastructure as a service (IaaS), and we stop there. All other data planning should be done by the customer. OK, you busted me, we do offer managed services as an additional option. We help the customer using that service to configure and protect their data.

We hear from people about Personal Health Information (PHI), credit card data, government data, banking data, insurance data, proprietary information related to code and data structure, and APIs that should be protected with their lives, etc. What is the one running theme? It’s data. And data is data folks, plain and simple!

Photographers want to protect their pictures, chefs want to protect their recipes, grandparents want to protect the pictures of their grandkids, and the Dallas Cowboys want to protect their playbook (not that it is exciting or anything). Data is data, and it should be protected.

So how do you go about doing that? That's where PLEB, the weird acronym in the title of this post, comes in!

PLEB stands for Physical, Logical, Encryption, Backups.

If you take those four topics into consideration when dealing with any type of data, you can limit the risk associated with data loss, destruction, and availability. Let’s look at the details of the four topics:

  • Physical Security—In a cloud model, physical security rests on the shoulders of the cloud service provider (CSP), which must meet the strict requirements of regulated workloads. Your CSP should have robust physical controls in place. It should be SOC 2 audited, and you should request the SOC 2 report and confirm it shows few or no exceptions. Think cameras, guards, key card access, biometric access, glass-break alarms, motion detectors, etc. Some, if not all, of these should make your list of must-haves.
  • Logical Access—This is likely a shared control family when dealing with cloud. If the CSP has a portal that can make changes to your systems, and the portal has a permissions engine allowing you to add users, then that portion of logical access is a shared control. First, the CSP should protect its portal permission system, while the customer should protect admin access to the portal by creating new privileged users who can make changes to systems. Second, and just as important, at provisioning time you must remove the initial credentials that were set up, add new, private credentials, and restrict access accordingly. Note that this second step is strictly a customer control.
  • Encryption—There are many ways to achieve encryption, both at rest and in transit. For data at rest you can use full disk encryption, virtual disk encryption, file or folder encryption, and/or volume encryption. This is required for many regulated workloads and is a great idea for any type of data with personal value. For public data in transit, you should consider SSL or TLS, depending on your needs. For backend connectivity from your place of business, office, or home into your cloud infrastructure, you should consider a secure VPN tunnel for encryption.
  • Backups—I can’t stress enough that backups are not just the right thing to do; they are essential, especially when using IaaS. You want a copy at the CSP that you can use if you need to restore quickly. But you also want another copy in a different location in case of a disaster that WILL be out of your control.
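To make the encryption and backup points concrete, here is a minimal shell sketch (the file names and inline passphrase are placeholders; in practice you would pull the key from a key manager, not type it on the command line): encrypt a copy of your data before it leaves your environment, then prove the backup actually restores.

```shell
# Example data standing in for a real backup source
echo "customer-data" > data.txt

# Encrypt at rest with AES-256 before shipping the copy offsite
# (requires OpenSSL 1.1.1+ for the -pbkdf2 option)
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in data.txt -out data.txt.enc -pass pass:example-passphrase

# Later: restore the backup and confirm it matches the original byte for byte
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in data.txt.enc -out restored.txt -pass pass:example-passphrase
cmp data.txt restored.txt && echo "backup verified"
```

A backup you have never test-restored is a hope, not a backup; make the verification step part of the routine.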

So take the PLEB and mitigate risk related to data loss, data destruction, and data availability. Trust me—you will be glad you did.

-@skinman454

August 7, 2014

Deploy or Die

“Forget about being a futurist, become a now-ist.” With those words, Joi Ito, the director of the MIT Media Lab, ends his most recent TED talk. What thrills me most is his encouragement to apply agile principles throughout any innovation process. Creating in the moment, building quickly, and improving constantly is exactly the story we’ve been advocating at SoftLayer for a long while.

Joi says that this new approach is possible thanks to the Internet. I actually want to take it further. Because the Internet has been around a lot longer than these agile principles, I argue that the real catalyst for the startups and technology disruptors we see nowadays was the widespread, affordable availability of cloud resources. The ability to deploy infrastructure on demand without long-term commitments, anywhere in the world, with the option to scale it up and down on the fly, decreased the cost of innovation dramatically. And fueling that innovation has always been the raison d'être of SoftLayer.

Joi compares two innovation models: the model before the Internet (I will go ahead and replace “Internet” with “cloud,” which I believe makes the case even stronger) and the new model. The world seemed much more structured before the cloud, governed by a certain set of rules and laws. When the cloud happened, the world became complex, low cost, and fast, with Newtonian rules often being defied.

Before, creating something new would cost millions of dollars. The process started with commercial minds, aka MBAs, who’d write a business plan, look for money to support it, and then hire designers and engineers to build the thing. Recently, this MBA-driven model has flipped: first designers and engineers build a thing, then they look for money from VCs or larger organizations, then they write a business plan, and then they move on to hiring MBAs.

A couple of months ago, I started to share this same observation more loudly. In the past, if an organization wanted to bring something new to the market, or just iterate on an existing offering, it involved a lot of resources: time, people, and supporting infrastructure. Only a handful of ideas, after cumbersome fights with processes, budget restrictions, and people (and their egos), got to see the light of day. Change was a luxury.

Nowadays the creators are people who used to be in the shadows, mainly taking instructions from “management” and spinning the hamster wheel they were put on. Now, the “IT crowd” no longer sits in the basements of their offices. They are creating new revenue streams and becoming driving forces within their organizations, or they are rolling out their own businesses as startup founders. There is a whole new breed of technology entrepreneurs thriving on what the cloud offers.

Coming back to the TED talk, Joi brings great examples proving that this new designers/engineers-driven model has pushed innovation to the edges and beyond not only in software development, but also in manufacturing, medicine, and other disciplines. He describes bottom-up innovation as democratic, chaotic, and hard to control, where traditional rules don’t apply anymore. He replaces the demo-or-die motto with a new one: deploy or die, stating that you have to bring something to the real world for it to really count.

He walks us through the principles behind the new way of doing things, and for each of those, without any hesitation, I can add, “and that’s exactly what the cloud enables” as an ending to each statement:

  • Principle 1: Pull Over Push is about pulling the resources from the network as you need them, rather than stocking them in the center and controlling everything. And that’s exactly what the cloud enables.
  • Principle 2: Learning Over Education means drawing conclusions and learning on the go—not from static information, but by experimenting, testing things in real life, playing around with your idea, seeing what comes out of it, and applying the lessons moving forward. And that’s exactly what the cloud enables.
  • Principle 3: Compass Over Maps calls out the high cost of writing a plan or mapping the whole project, as it usually turns out to be neither very accurate nor useful in the unpredictable world we live in. It’s better not to plan the whole thing with all the details ahead, but to know the direction you’re headed and leave yourself the freedom of flexibility, to adjust as you go, taking into account the changes resulting from each step. And that’s exactly what the cloud enables.

I dare to say that all the above is the true power of cloud without fluff, leaving you with an easy choice when facing the deploy-or-die dilemma.

- Michalina

August 6, 2014

Healthy Startups: HealthXL Global Gathering in Dublin

We’ve all heard nightmare stories about the health care industry. The combination of insurance companies, health care providers, government regulation, and literal “life and death” situations can make for a contentious environment. And with the outdated policies and procedures that permeate the industry, it’s a perfect opportunity for innovation.

When I met Martin Kelly of HealthXL a few months ago, I was intrigued by what he was building. He saw the need for innovation in health care, and he started looking around for the startups that were focusing on these kinds of issues. And while he encountered several groups with a health care focus, no one really took the lead to connect them all together to collaborate or strategize about how startups can really change health care. I mean REALLY change it.

Martin, a former IBMer, is super-passionate about innovation in technology for the health care industry, so he leveraged the IBM network and the relationships he built during his time at IBM to address a few simple questions:

  • What needs to happen in health care, through technology, to make the experience and the system better for us all?
  • What is the moonshot that needs to happen for true innovation to happen?

The group he brought together consisted of experts from enterprise companies like the Cleveland Clinic, ResMed, and Johnson & Johnson as well as startup influencers in the health care community like Aussie Jason Berek-Lewis of HealthyStartups and Silicon Valley Bank.

And when those different viewpoints came together, he realized the questions weren’t quite as “simple” as he expected.

Martin invited me to join the conversation for three days at the HealthXL Global Gathering in Dublin to hear what global leaders in the industry are saying about health care. And boy … was I surprised.

To their credit, these leaders (and their respective companies) are very willing and able to innovate. They feel the pain of heavy administrative responsibilities, often involving duplication and triplication of work. They know how hard it is to track patients across different systems as they change jobs, insurance companies, and providers. They struggle with not being able to communicate effectively with insurance providers. And they fully understand how over-commoditized health care has become, as well as how its focus has drifted away from patients.

The bottom line: They feel the pain of not having the right technology to run more efficient, cost-effective, and patient-centered health care businesses. They’ve seen the finance industry integrate technology over the past few years, but they're somewhat unsure of what that could look like for them. This can only mean that there are huge opportunities for startups and innovative technologies.

I couldn’t help but consider how nicely these conversations fit in with the Sprint Mobile Health Accelerator powered by our friends at TechStars that @andy_mui and I visited in March. The conversations inside that accelerator are the missing pieces to the conversations that companies like the Cleveland Clinic and Johnson & Johnson were having. Those enterprises have the opportunity to invest in early-stage entrepreneurs and born-on-the-Web startups to incubate technologies and solutions that would, in time, make their businesses more profitable and efficient.

But the biggest opportunity is what that means for patients.

The most telling story to play out over the next 10 years will be whether the largest health care providers and other businesses will approach these market opportunities in pursuit of cultivating a health care system that prioritizes patients. After hearing the conversation at the HealthXL accelerator global summit, that’s the ultimate challenge.

The startup ecosystem is full of entrepreneurs and teams that can deliver on the goal of improving health care while secondarily (and in some cases indirectly) improving the way health care businesses run. These efficiencies will result in MORE clients, customers, partners, and profitability in the end, but they may require some hefty changes at the outset. Will the industry allow itself to admit what it doesn’t know?

I am excited to see where this goes. In a few years, I think we’re going to consider Martin Kelly as a key builder of this movement, and more and more businesses will be turning to him for answers to the most important of all questions: “How do we do this?”

We’re excited to be able to support Martin and all of the health care startups in the marketplace today. What will the future of health care look like when these innovators and entrepreneurs are done with it?

The possibilities are endless.

-@JoshuaKrammes

July 16, 2014

Vyatta Gateway Appliance vs Vyatta Network OS

I hear this question almost daily: “What’s the difference between the Vyatta Network OS offered by SoftLayer and the SoftLayer Vyatta Gateway Appliance?” The honest answer is: from a software perspective, nothing. However, from a deployment perspective, there are a couple of fundamental differences.

Vyatta Network OS on the SoftLayer Platform

SoftLayer offers customers the ability to spin up different bare metal or virtual server configurations, and choose either the community or subscription edition of the Vyatta Network operating system. The server is deployed like any other host on the SoftLayer platform, with a public and private interface placed in the VLANs selected while ordering. Once online, you can route traffic through the Vyatta Network server by changing the default gateway on your hosts to the Vyatta Network server IP rather than the SoftLayer default gateway. You have the option to configure ingress and egress ACLs for your bare metal or virtual servers that route through the Vyatta Network server. The Vyatta Network server can also be configured as a VPN endpoint to terminate Internet Protocol Security (IPsec), Generic Routing Encapsulation (GRE), or SSL-based OpenVPN connections, and securely connect to the SoftLayer private network. Sounds great, right?
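As an illustration, routing a backend host through a Vyatta Network OS server generally comes down to a one-line routing change on each host. The addresses and interface name below are placeholders, not real SoftLayer assignments:

```shell
# On each backend host, replace the default route so traffic egresses
# through the Vyatta Network OS server's private IP (placeholder below)
# instead of the SoftLayer default gateway. Requires root.
ip route replace default via 10.100.20.5 dev eth0

# Confirm the change took effect
ip route show default
```

Notice that this is purely a host-side choice, which is exactly the weakness discussed below: whoever controls the host can point the route right back at the original gateway.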

So, how is a Vyatta Network OS server different from a SoftLayer Vyatta Gateway Appliance?

A True Gateway

While it’s true that the Vyatta Gateway Appliance has the same functionality as a server running the Vyatta Network operating system, one of the primary differences is that the Vyatta Gateway Appliance is delivered as a true gateway. You may be asking yourself what that means. It means that the Vyatta Gateway Appliance is the only entry and exit point for traffic on the VLANs you associate with it. When you place an order for the Vyatta Gateway Appliance and select your public and private VLANs, the appliance comes online with its public and private interfaces natively in a transit VLAN. The VLANs you selected are trunked to the gateway appliance’s public and private interfaces via an 802.1q trunk configured on the server’s switch ports. These VLANs show up in the customer portal as associated VLANs for the Vyatta Gateway Appliance.
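To give a flavor of the gateway side of that trunk, here is a hypothetical Vyatta configuration-mode sketch that addresses one trunked customer VLAN on the private bonded interface. The interface name, VLAN ID, and subnet are examples only, not values from any real deployment:

```shell
# Run on the gateway appliance itself, in Vyatta configuration mode
configure

# 802.1q virtual interface (vif) for an associated private VLAN 1234;
# this address becomes the default gateway for servers in that VLAN
set interfaces bonding bond0 vif 1234 address 10.52.8.1/26

commit
save
exit
```

Each additional associated VLAN simply gets its own vif, which is what makes the multi-VLAN segmentation described later possible.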

This configuration allows SoftLayer to create an outside, unprotected interface (in the transit VLAN) and an inside, protected interface (on your bare metal or virtual server VLAN). As part of the configuration, we set up SoftLayer routers to statically route all IP space belonging to the associated VLANs to the Vyatta Gateway Appliance’s transit VLAN IP address. The servers you have in a VLAN associated with the gateway appliance can no longer use the SoftLayer default gateway to route in and out of the VLAN. All traffic must pass through the gateway appliance, making it a true gateway.

This differs from a server deployed with the Vyatta Network OS because hosts behind the Vyatta Network OS server can route around it by simply changing their default gateway back to the SoftLayer default gateway.

N-Tier Architecture

Another difference is that the gateway appliance gives customers the option to route multiple public and private VLANs in the same pod (delineated by an FCR/BCR pair) through the device. This allows you to use the gateway appliance to create granular segmentation between different VLANs within your environment, and set up a traditional tiered infrastructure environment with ingress and egress rules between the tiers.

A server running Vyatta Network OS cannot be configured this way. The Vyatta Network OS server is placed in a single public and private VLAN, and there is no option to associate different VLANs with the server.

I hope this helps clear up the confusion around Vyatta on the SoftLayer platform. As always, if you have any questions or concerns about any of SoftLayer’s products or services, the sales and sales engineering teams are happy to help.

-Kelly

July 14, 2014

London Just Got Cloudier—LON02 is LIVE!

Summer at SoftLayer is off to a great start. As of today, customers can order SoftLayer servers in our new London data center! This facility is SoftLayer's second data center in Europe (joining Amsterdam in the region), and it's one of the most anticipated facilities we've ever opened.

London is the second SoftLayer data center to go live this year, following last month's data center launch in Hong Kong. In January, IBM committed to investing $1.2 billion to expand our cloud footprint, and it's been humbling and thrilling at the same time to prepare for all of this growth. And this is just the beginning.

When it comes to the Europe, Middle East, and Africa (EMEA) region, SoftLayer's largest customer base is in the U.K. For the last two and a half years I’ve been visiting London quite frequently, and I've met hundreds of customers who are ecstatic to finally have a SoftLayer data center in their own backyard. As such, I'm especially excited about this launch: our U.K. customers now get our global platform with a local address.

The SoftLayer Network

Customers with location-sensitive workloads can have their data reside within the U.K. Customers with infrastructure in Amsterdam can use London to add in-region redundancy to their environments. And businesses that target London's hyper-competitive markets can deliver unbelievable performance to their users. LON02 is fully integrated with the entire SoftLayer platform, so bare metal and virtual servers in the new data center are seamlessly connected to servers in every other SoftLayer data center around the world. As an example of what that means in practice, you can replicate or integrate data between servers in London and Amsterdam data centers with stunning transfer speeds. For free. You can run your databases on bare metal in London, keep backups in Amsterdam, spin up virtual servers in Asia and the U.S. And your end users get consistent, reliable performance—as though the servers were in the same rack. Try beating that!

London is a vibrant, dynamic, and invigorating city. It's consistently voted one of the best places for business in the region. It's considered a springboard for Europe, attracting more foreign investors than any other location in the region. A third of world’s largest companies are headquartered in London, and with our new data center, we're able to serve them even more directly. London is also the biggest tech hub in-region and the biggest incubator for technology startups and entrepreneurs in Europe. These cloud-native organizations have been pushing the frontiers of technology, building their businesses on our Internet-scale platform for years, so we're giving them an even bigger sandbox to play in. My colleagues from Catalyst, our startup program, have established solid partnerships with organizations such as Techstars, Seedcamp and Wayra UK, so (as you can imagine) this news is already making waves in the U.K. startup universe.

For me, London will always be the European capital of marketing and advertising (and a strong contender for the top spot in the global market). In fact, two-thirds of international advertising agencies have their European headquarters in London, and the city boasts the highest density of creative firms of any city or region in the world. Because digital marketing and advertising use cases are some of the most demanding technological workloads, we're focused on meeting the needs of this market. These customers require speed, performance, and global reach, and we deliver. Can you imagine real-time bidding (RTB) with network lag? An ad pool for multinationals that is accessible in one region, but not so much in another? A live HD digital broadcast running on shared, low-I/O machines? Or 3D graphics rendering in a purely virtualized environment? Just thinking about those scenarios makes me cringe, and it reinforces my excitement for our new data center in London.

MobFox, a customer that happens to be the largest mobile ad platform in Europe and among the top five globally, shares my enthusiasm. MobFox serves more than 150 billion impressions per month for clients including Nike, Heineken, EA, eBay, BMW, Netflix, Expedia, and McDonald's (for comparison, I was told that Twitter does about 7 billion a month). Julian Zehetmayr, the brilliant 23-year-old CEO of MobFox, agreed that London is a key location for businesses operating in the digital advertising space and expressed his excitement about the opportunity we’re bringing his company.

I could go on and on about why this news is soooo good. But instead, I'll let you experience it yourself. Order bare metal or virtual servers in London, and save $500 on your first month's service.

Celebrate a cloudy summer in London!

-Michalina

July 1, 2014

The Cloud in 100 Years

Today’s cloud is still in its infancy, with less than 10 years under its belt, yet it has produced some of the most advanced products and solutions known to date. Cloud, in fact, has helped change how the world connects by making information, current events, and communication available globally, at the speed of light.

The Internet itself was born in the 1960s, and in just 44 years, look at what it has accomplished! Websites like Google, Bing, and Yahoo provide up-to-the-second information, reinventing and replacing the role dictionaries and encyclopedias once played. Facebook, Twitter, and Instagram are revolutionizing how most of the world communicates. WordPress, Tumblr, and blogging platforms give voices to many journalists and writers who were once heard by few, if any. It is truly a new landscape today. Do you think that when Herman Hollerith invented the punch card in the 1890s, he imagined it would evolve data processing into “the cloud” in just 100 years? IBM 100 explains:

One could argue that the information age began with the punch card, and that data processing as a transformational technology began with its 1928 redesign by IBM. This thin piece of cardboard, with 80 columns of tiny rectangular holes made the world quantifiable. It allowed data to be recorded, stored, and analyzed. For nearly 50 years, it remained the primary vehicle for processing the essential facts and figures that comprised countless industries, in every corner of the globe. (IBM 100)

What about the future?

It’s obvious that predicting 10 decades into the future is a difficult task, but one thing is for sure, this cloud thing is just getting started.

  • What will we call it? The Internet/World Wide Web is now almost synonymous with the term cloud. I predict that in the next 20 years it will take on another name. Something even more nebulous than the cloud … maybe even “The Nebula.” Or … quite possibly, Skynet!
  • How will it be accessed? In 100 years, I think the more fitting question will be, “how will you hide from it?” Today, we are voluntarily connected with our smart phones. You can be found and contacted using varying mediums from a single, handheld device. FaceTime, WhatsApp, Skype, Tango … you name it. You can make video calls to people halfway around the world in seconds. If Moore’s law still applies in 100 years, our devices could potentially be 50 times smaller than what they are today.
  • Ultimate Control: Nanotechnology will have the ability to control the weather and not only determine if we will have rain but regulate it. Weather control could rid the world of drought and make uninhabitable areas of the world flourish.
  • Medicine: The term “antibiotics” will take on a whole new meaning for medicine in 100 years. Imagine instead of getting a shot of penicillin, you receive 50 mL of microscopic robots that can attack the virus directly, from within. The robots then send a push notification to your ‘iPhone 47S’ notifying you that your flu bug has been located and eradicated and that you can press “OK” to send the final report to your physician. The Magic School Bus finally becomes a reality!

Without a doubt, cloud services will be everywhere in the future. The change is already taking place with early adopters and businesses. In the 10 years since the industry coined the term cloud, it’s become a birthplace for disruptive technology and industry behavior. This has caught the attention of traditional IT organizations as a way to save capital, lower time to market, and increase research and development on their own products and services.

SoftLayer is dedicated to helping the transformation of mid-market and enterprise companies alike. We understand that the cloud is virtually making this world smaller as companies reach into markets that were once out of reach, which is why we’re in the process of doubling our data center footprint to reach those unreachable areas of the world. Don’t be surprised when we announce our first data center on the moon!

-Harold

June 30, 2014

OpenNebula 4.8: SoftLayer Integration

In the next month, the team of talented developers at C12G Labs will be rolling out OpenNebula 4.8, and in that release, they will be adding integration with SoftLayer! If you aren't familiar with OpenNebula, it's a full-featured open-source platform designed to bring simplicity to managing private and hybrid cloud environments. Using a combination of existing virtualization technologies with advanced features for multi-tenancy, automatic provisioning, and elasticity, OpenNebula is driven to meet the real needs of sysadmins and devops.

In OpenNebula 4.8, users can quickly and seamlessly provision and manage SoftLayer cloud infrastructure through OpenNebula's simple, flexible interface. From a single pane of glass, you can create virtual data center environments, configure and adjust cloud resources, and automate the execution and scaling of multi-tiered applications. If you don't want to leave the command line, you can access the same functionality from a powerful CLI tool or through the OpenNebula API.
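For those taking the command-line route, day-to-day provisioning with OpenNebula's CLI tools looks roughly like the following. The template name is hypothetical, and the commands assume a configured OpenNebula front end with credentials in place:

```shell
# List the VM templates registered with the OpenNebula front end
onetemplate list

# Instantiate a new VM from a template (template name is an example)
onetemplate instantiate "softlayer-ubuntu" --name web01

# Check the VM's state, then power it down when finished
onevm list
onevm poweroff web01
```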

When the C12G Labs team approached us with the opportunity to be featured in the next release of their platform, several folks from the office were happy to contribute their time to make the integration as seamless as possible. Some of our largest customers have already begun using OpenNebula to manage their hybrid cloud environments, so official support for the SoftLayer cloud in OpenNebula is a huge benefit to them (and to us). The result of this collaboration will be released under the Apache license, and as such, it will be freely available to the public.

To give you an idea of how easy OpenNebula is to use, they created an animated GIF to show the process of creating and powering down virtual machines, creating a server image, and managing account settings:

OpenNebula

We'd like to give a big shout-out to the C12G Labs team for all of the great work they've done on the newest version of OpenNebula, and we look forward to seeing how the platform continues to grow and improve in the future.

-@khazard

June 9, 2014

Visualizing a SoftLayer Billing Order

In my time spent as a data and object modeler, I’ve dealt with both good and bad examples of model visualization. As an IBMer through the Rational acquisition, I have been using modeling tools for a long time. I can appreciate a nice diagram shining a ray of light on an object structure, and abhor a behemoth spaghetti diagram.

When I started studying SoftLayer’s API documentation, I saw both the relational and hierarchical nature of SoftLayer’s concept model. The naming convention of API services and data types embodies their hierarchical structure. While reading about “relational properties” in data types, I thought it would be helpful to see diagrams showing relationships between services and data types versus clicking through reference pages. After all, diagramming data models is a valuable complement to verbal descriptions.

One way people can deal with complex data models is to digest them a little at a time. I can’t imagine a complete data model diagram of SoftLayer’s cloud offering, but I can try to visualize small portions of it. In this spirit, after reviewing articles and blog entries on creating product orders using SoftLayer’s API, I drew an E-R diagram of the basic order elements using IBM Rational Software Architect.

The diagram, Figure 1, should help people understand data entities involved in creating SoftLayer product orders and the relationships among the entities. In particular, IBM Business Partners implementing custom re-branded portals to support the ordering of SoftLayer resources will benefit from visualization of the data model. Picture this!

Figure 1. Diagram of the SoftLayer Billing Order

A user account can have many associated billing orders, which are composed of billing order items. Billing order items can contain multiple order containers, each holding a product package. Each package can have several configurations, including product item categories, which can be composed of product items, with each item having several possible prices.

-Andrew

Andrew Hoppe, Ph.D., is a Worldwide Channel Solutions Architect for SoftLayer, an IBM Company.

June 5, 2014

Sysadmin Tips and Tricks - Understanding the 'Default Deny' Server Security Principle

In the desktop world, people tend to feel good about their system’s security when they have the latest anti-virus and anti-spyware installed and keep their applications up-to-date. Those of us who compute for a living know that this is nothing close to resembling a “secure” state. But it’s the best option for non-technical people at this time.

Servers, on the other hand, exist in a more hostile environment than desktop machines, which is why keeping them secure requires skilled professionals. This means not only doing things like keeping applications patched and up-to-date, but also grasping the underlying principles of system security. Doing that allows us to make informed and skillful decisions for our unique systems—because no one knows our servers as well as we do.

One very powerful concept is “Default Deny” (as in deny by default), which means that "everything not explicitly permitted is forbidden." What does this mean, and why is it important?

Let’s look at a simple example using file permissions. Let’s say you installed a CGI (Common Gateway Interface) application, such as some blog software, and you’re having trouble getting it to work. You’ve decided the problem is the permissions on the configuration file. In this case, user “rasto” is the owner of the file. You try chmodding it 755 and it works like this:

-rwxr-xr-x 1 rasto rasto 216 May 27 16:11 configuration.ini

Now that it works, you’re ready to move on to your next project. But there’s a possible security problem here. As you can see, you have left the configuration file readable and executable by everyone. There is almost certainly no reason for that: CGI scripts typically run as the owner of the file, so users in the same group (and other random users of the system) have no need to read this configuration file. After all, some configuration files contain database passwords. If I have access to any other account on this system, I can simply “cat” the configuration file and get trivial access to your data!

So the trick is to find the least permissions required to run this script. With a little work, you may discover that it runs just fine with 700:

-rwx------ 1 rasto rasto 216 May 27 16:11 configuration.ini
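The same tightening can be applied and verified programmatically. A minimal sketch (the file name is the one from the example; the script creates it if needed so the demo is self-contained):

```python
import os
import stat

path = "configuration.ini"
open(path, "a").close()  # ensure the file exists for this demo

# Default deny: grant only what the owner needs, nothing for group/other.
os.chmod(path, stat.S_IRWXU)  # 0o700 -- rwx for owner only

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # -> 0o700
```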

By taking a little extra time, you have made your system much more secure. “Default Deny” means deny everything that is not explicitly required. The beautiful thing about this policy is that you can remove vulnerabilities that you neither comprehend nor even know about. Instead of making a list of “bad” things you essentially make a list of “good” things, and allow only those things to happen. You don’t even have to realize that someone could read the file because you’ve made it a policy to always allow the least amount of access possible to all things.

Another example might be to prune your php.ini to get rid of any expanded capabilities not required by PHP scripts running on your system. If a zero-day vulnerability arises in PHP that affects one of the things you’ve disallowed, it simply won’t affect you because you’ve disabled it by default.
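For instance, a hardened php.ini might disable risky functions and features that no script on the system needs. Which directives you can safely turn off depends entirely on your applications; the list below is illustrative:

```ini
; Deny by default: expose only what your scripts actually use.
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
allow_url_fopen = Off
allow_url_include = Off
expose_php = Off
```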

Another scenario might be to remove packages from your system that are not being used. If someone tries to inject some Ruby code into your system, it won’t run without Ruby present. If you’re not using it, get rid of it, and it can’t be used against you.

Note: It’s very easy to be wrong about what is not being used on your system—you can definitely break things this way—so I suggest a go-slow approach, particularly in regard to built-in packages.

The important thing is the concept. I hope you can see now why a Default Deny policy is a powerful tool to help keep your system more secure.

-Lee

June 3, 2014

My 5 Favorite Sublime Text 2 Plugins

I can’t believe that it was only a year ago that I learned of Sublime Text 2. I know, I know … where have I been? What kind of developer was I that I didn’t even know of Sublime Text? I’ll take the criticism, as I can honestly say it has been the best text editor I have ever used.

It’s extremely fast. I rarely wait for saves, uploads, or syntax highlighting; it keeps up with everything I do and lets me develop directly from the keyboard. I hardly ever reach for my mouse!

It looks awesome. It has a kind of retro look for those developers who remember coding purely from the terminal or DOS. It really brings back memories.

It can be extended. Need some extra functionality that doesn’t come out-of-the-box? Sublime Text 2 has a range of available plugins that you can install to enhance your capabilities with this awesome text editor. In this blog, I’ll cover my top five favorite plugins of all time, what they do, and why they’re great!

1. BracketHighlighter

Many people believe that bracket highlighting should come built-in for developers of all languages. I agree, but at least Sublime Text 2 provides a plugin for it. It’s a very simple addition: it lets you see whether each opening bracket has an accompanying closing bracket. Many developers can tell you stories of large, complex programs that consumed hours of their time as they searched for one simple error … only to find that it was just a missing closing bracket.

In addition, it highlights opening and closing tags and quotes for those of you who do a lot of HTML/XHTML. Both the bracket and tag settings are customizable.

For more details on the plugin check out the BracketHighlighter GitHub page.

2. DocBlockr

This is a neat plugin that speeds up and simplifies documentation. It supports PHP, JavaScript, Java, ActionScript, Objective-C, C, C++, and CoffeeScript.

By typing this:

/** (Press Enter)

The plugin automatically returns this:

/**
 *
 */

Boom, the quickest way to document that I’ve ever seen.

To document a function, just type the comment opener right above it:

/** (Press Enter)
function myFunction(var1, var2) { }

And, it'll become:

/**
 * [myFunction description]
 * @param  {[type]} var1 [description]
 * @param  {[type]} var2 [description]
 * @return {[type]}      [description]
 */

function myFunction (var1, var2) { }

When you want to do variable documentation, the structure is similar:

/** (Press Enter)
myVar = 10

The plugin will fill out the documentation block like this:

/**
 * [myVar description]
 * @type {Number}
 */

Tell me that this isn’t nifty! If you want to try it out or just get a closer look at this plugin, head here.

3. Emmet (previously known as Zen Coding)

Unfortunately, I encountered some oddities when I tried to install Emmet with SublimeLinter, so I decided to disable the Linter in favor of Emmet to give it a spin. I absolutely love Emmet.

It provides a much more efficient way to code by providing what they call “abbreviations.” For example, if I want to create a div with an unordered list and one bullet point in it, Emmet lets me save myself a lot of time ... I can type this into Sublime:

div>ul>li

And press Control+E, and my code automatically turns into this:

<div>
    <ul>
        <li></li>
    </ul>
</div>

If I need to add multiple <li> tags, I can easily replicate them with a small addition:

div>ul>li*3

When I hit Control+E, voila! The unordered list structure is quickly generated:

<div>
    <ul>
        <li></li> 
        <li></li>
        <li></li>
    </ul>
</div>

That's just the tip of the iceberg when it comes to Emmet's functionality, and if you’re as impressed as I am, you should check out their site: http://docs.emmet.io/

4. SFTP

I think the title of the plugin says it all. It allows you to connect directly to your server and sync projects and files just by saving. You’ll never again have to edit a file in a text editor, then open your FTP client and upload the file manually. Now you can do it directly from Sublime Text 2.

When used in conjunction with Projects, you’ll find that you can easily save hours of time spent on remote uploading. By far, SFTP for Sublime Projects is one of the most essential plugins you’ll need for any project!

5. SideBarEnhancements

This is a small plugin that makes minor adjustments to the Files and Folders sidebar, providing a more intuitive interface. Though it doesn’t add much functionality, it can definitely speed things up. Take a look at the plugin on the SideBarEnhancements GitHub page.

I hope this list of Sublime Text 2 plugins will enhance your capabilities and ease up your processes, as it has done for me. Give them a try and let me know what you think. Also, if you have a different favorite plugin, I’d love to hear about it.

-Cassandra
