Posts Tagged 'Rules'

September 24, 2013

Four Rules for Better Code Documentation

Last month, Jeremy shared some valuable information regarding technical debt on SLDN. In his post, he discussed how omitting pertinent information while developing a project can cause more work to build up in the future. One of the most common areas developers overlook when it comes to technical debt is documentation. This oversight comes in two forms: a complete omission of any documentation and inadequate information when documentation does exist. Simply documenting the functionality of your code is a great start, but the best way to close the information gap and avoid technical debt that stems from documentation (or the lack thereof) is to follow four simple rules.

1. Know Your Audience

When we're talking about code, it's safe to say you'll have a fairly technical audience; however, it is important to gauge the level of understanding your audience has of the code itself. While they should be able to grasp common terms and development concepts, they may be unfamiliar with the functionality you are programming. Because of this, it's a good idea to provide a link to an internal, technical knowledge base or wiki that will provide in-depth details on the functionality of the technology they'll be working with. We try to use a combination of internal and external references that we think will provide the most knowledge to developers who may be looking at our code. Here's an example of that from our Dns_Domain class:

 * @SLDNDocumentation Service Overview <<< EOT
 * SoftLayer customers have the option of hosting DNS domains on the SoftLayer
 * name servers. Individual domains hosted on the SoftLayer name servers are
 * handled through the SoftLayer_Dns_Domain service.
 *
 * Domain changes are applied automatically by our nameservers, but changes may
 * not be received by the other name servers on the Internet for 72 hours after
 * your change. The SoftLayer_Dns_Domain service does not apply to customers who
 * run their own nameservers on servers purchased from SoftLayer.
 *
 * SoftLayer provides secondary DNS hosting services if you wish to maintain DNS
 * records on your name server, but have records replicated on SoftLayer's name
 * servers. Use the [[SoftLayer_Dns_Secondary]] service to manage secondary DNS
 * zones and transfers.
 * EOT
 *
 * @SLDNDocumentation Service External Link http://en.wikipedia.org/wiki/Domain_name_system Domain Name System at Wikipedia
 * @SLDNDocumentation Service External Link http://tools.ietf.org/html/rfc1035 RFC1035: Domain Names - Implementation and Specification at ietf.org
 * @SLDNDocumentation Service See Also SoftLayer_Dns_Domain_ResourceRecord
 * @SLDNDocumentation Service See Also SoftLayer_Dns_Domain_Reverse
 * @SLDNDocumentation Service See Also SoftLayer_Dns_Secondary
 *

Enabling users to learn more about a topic, product, or even a specific call alleviates the need for them to ask multiple questions regarding the "what" or "why," and it will also minimize the need for you to explain more basic concepts regarding the technology supported by your code.

2. Be Consistent

There are two main areas developers should focus on when it comes to consistency: formatting and terminology.

Luckily, formatting is pretty simple. Most languages have a set of standards attached to them that extend to the docblock, which is where the documentation portion of the code normally lives. Docblocks can be used to provide an overview of the class, identify authors or product owners, and provide additional references for those using the code. The example below uses PHP's standards for documentation tagging and allows users to quickly identify the parameters and return value for the createObject method in the Dns_Domain class:

    /**
     * @param string $objectType
     * @param object $templateObject
     *
     * @return SoftLayer_Dns_Domain
     */
    public static function createObject($objectType = __CLASS__, $templateObject)

Staying consistent when it comes to terminology is a bit more difficult, especially if no standards have been in place before. As an example, we can look to one of the most common elements of hosting: the server. Some people call this a "box," a "physical instance" or simply "hardware." The server may be a name server, a mail server, a database server or a web server.

If your company has adopted a term, use that term. If they haven't, decide on a term with your coworkers and stick to it. It's important to be as specific as possible in your documentation to avoid any confusion, and when you adopt specific terms in your documentation, you'll also find that this consistency will carry over into conversations and meetings. As a result, training new team members on your code will go more smoothly, and it will be easier for other people to assist in maintaining your code's documentation.

Bonus: It's much easier to search and replace when you only have to search for one term.

3. Forget What You Know About Your Code ... But Only Temporarily

Regardless of the industry, people who write their own documentation tend to omit pertinent information about the topic. When I train technical writers, I use the peanut butter and jelly example: How would you explain the process of making a peanut butter and jelly sandwich? Many would-be instructors omit things that would result in a very poorly made sandwich ... if one could be made at all. If you don't tell the reader to get the jelly from the cupboard, how can they put jelly on the sandwich? It's important to ask yourself when writing, "Is there anything that I take for granted about this piece of code that other users might need or want to know?"

Think about a coding example where a method calls one or more other methods automatically in order to do its job, or where one method acts like another method. In our API, the createObjects method uses the logic of the createObject method that we just discussed. While some developers may pick up on the connection based on the method's name, it is still important to reference the similarities so they can better understand exactly how the code works. We do this in two ways: First, we state that createObjects follows the logic of createObject in the overview. Second, we note that createObject is a related method. The code below shows exactly how we've implemented this:

     * @SLDNDocumentation Service Description Create multiple domains at once.
     *
     * @SLDNDocumentation Method Overview <<< EOT
     * Create multiple domains on the SoftLayer name servers. Each domain record
     * passed to ''createObjects'' follows the logic in the SoftLayer_Dns_Domain
     * ''createObject'' method.
     * EOT
     *
     * @SLDNDocumentation Method Associated Method SoftLayer_Dns_Domain::createObject

4. Peer Review

The last rule, and one that should not be skipped, is to always have a peer look over your documentation. There really isn't a lot of depth behind this one. In Development, we try to peer review documentation during the code review process. If new content is written during code changes or additions, developers can add content reviewers, who have the ability to add notes with revisions, suggestions and questions. Once all parties are satisfied with the outcome, we close out the review in the system and the content is updated in the next code release. With peer review of documentation, you'll catch typos, inconsistencies and gaps. It always helps to have a second set of eyes before your content hits its users.

Writing better documentation really is that easy: Know your audience, be consistent, don't take your knowledge for granted, and use the peer review process. I put these four rules into practice every day as a technical writer at SoftLayer, and they make my life so much easier. By following these rules, you'll have better documentation for your users and will hopefully eliminate some of that pesky technical debt.

Go, and create better documentation!

-Sarah

May 10, 2013

Understanding and Implementing Coding Standards

Coding standards provide a consistent framework for development within a project and across projects in an organization. A dozen programmers can complete a simple project in a dozen different ways by using unique coding methodologies and styles, so I like to think of coding standards as the "rules of the road" for developers.

When you're driving in a car, traffic is controlled by "standards" such as lanes, stoplights, yield signs and laws that set expectations around how you should drive. When you take a road trip to a different state, the stoplights might be hung horizontally instead of vertically or you'll see subtle variations in signage, but because you're familiar with the rules of the road, you're comfortable with the mechanics of driving in this new place. Coding standards help control development traffic and provide the consistency programmers need to work comfortably with a team across projects. The problem with allowing developers to apply their own unique coding styles to a project is the same as allowing drivers to drive as they wish ... Confusion about lane usage, safe passage through intersections and speed would result in collisions and bottlenecks.

Coding standards often seem restrictive or laborious when a development team starts considering their adoption, but they don't have to be ... They can be implemented methodically to improve the team's efficiency and consistency over time, and they can be as simple as establishing that all instantiations of an object must be referenced with a variable name that begins with a capital letter:

$User = new User();

While that example may seem overly simplistic, it actually makes life a lot easier for all of the developers on a given project. Regardless of who created that variable, every other developer can see the difference between a variable that holds data and one that instantiates an object. Think about the shapes of signs you encounter while driving ... You know what a stop sign looks like without reading the word "STOP" on it, so when you see a red octagon (in the United States, at least), you know what to do when you approach it in your car. In the same way, a capitalized variable name immediately tells you what kind of value it holds.
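To make that concrete, here's a minimal sketch of the convention (the class and variable names are hypothetical, not from a real code base):

// Plain data: lowercase variable names
$domain_count = 12;
$domain_names = array('example.com', 'example.org');

// Object instances: capitalized variable names make the difference obvious
$User = new User();
$Domain = new Dns_Domain();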

The example I gave of capitalizing instantiated objects is just that: an example. When it comes to coding standards, the most effective rules your team can incorporate are the ones that make the most sense to you. While there are a few best practices in terms of formatting and commenting in code, the most important characteristics of coding standards for a given team are consistency and clarity.

So how do you go about creating a coding standard? Most developers dislike doing unnecessary work, so the easiest way to create a coding standard is to start from an existing one. Take a look at any libraries or frameworks you are using in your current project. Do they use any coding standards? Are those coding standards something you can live with or use as a starting point? You are free to make any changes to them in order to best facilitate your team's needs, and you can even set how strictly specific coding standards must be adhered to. Take, for example, left-hand comparisons:

if ( $a == 12 ) {} // right-hand comparison
if ( 12 == $a ) {} // left-hand comparison

Both of these statements are valid but one may be preferred over the other. Consider the following statements:

if ( $a = 12 ) {} // supposed to be a right-hand comparison but is now an assignment
if ( 12 = $a ) {} // supposed to be a left-hand comparison but is now an assignment

The first statement now evaluates to true because $a is assigned the value of 12, which causes the code within the if-statement to execute (not the desired result). The second statement causes a parse error, making it obvious that a mistake in the code has occurred. Because our team couldn't come to a consensus, we decided to allow both of these standards ... Either of these two formats is acceptable and both will pass code review, but they are the only two acceptable variants. Code that deviates from those two formats would fail code review and would not be allowed in the code base.
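If you'd rather enforce a rule like this automatically than rely on reviewers' eyes, a static analysis tool can help. As a minimal sketch (this assumes PHP_CodeSniffer, which is not part of the process described above), a ruleset that flags accidental assignments inside conditions might look like this:

<?xml version="1.0"?>
<!-- ruleset.xml: a minimal team standard for PHP_CodeSniffer -->
<ruleset name="TeamStandard">
    <!-- Flags statements like "if ($a = 12)" where a comparison was intended -->
    <rule ref="Generic.CodeAnalysis.AssignmentInCondition"/>
</ruleset>

Running phpcs --standard=ruleset.xml src/ would then flag any code that slips an assignment into a condition, regardless of which comparison style the author prefers.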

Coding standards play an important role in the efficient development of a project when you have several programmers working on the same code. By adopting coding standards and following them, you'll avoid a free-for-all in your code base, and you'll be able to look at any line of code and know more about it than what the literal code is telling you ... just like seeing a red octagon posted on the side of the road at an intersection.

-@SoftLayerDevs

April 29, 2013

Web Development - Installing mod_security with OWASP

You want to secure your web application, but you don't know where to start. A number of open-source resources and modules exist, but that variety is more intimidating than it is liberating. If you're going to take the time to implement application security, you don't want to put your eggs in the wrong basket, so you wind up suffering from analysis paralysis as you compare all of the options. You want a powerful, flexible security solution that isn't overly complex, so to save you the headache of making the decision, I'll make it for you: Start with mod_security and OWASP.

ModSecurity (mod_security) is an open-source Apache module that acts as a web application firewall. It is used to help protect your server (and websites) from several methods of attack, the most common being brute force. You can think of mod_security as an invisible layer that separates users from the content on your server, quietly monitoring HTTP traffic and other interactions. It's easy to understand and simple to implement.

The challenge is that without some advanced configuration, mod_security isn't very functional, and that advanced configuration can get complex pretty quickly. You need to determine and set additional rules so that mod_security knows how to respond when approached with a potential threat. That's where the Open Web Application Security Project (OWASP) comes in. OWASP maintains an enhanced core rule set that the mod_security module can follow to prevent attacks on your server.

The process of getting started with mod_security and OWASP might seem like a lot of work, but it's actually quite simple. Let's look at the installation and configuration process in a CentOS environment. First, we want to install the dependencies that mod_security needs:

## Install the GCC compiler and mod_security dependencies ##
$ sudo yum install gcc make
$ sudo yum install libxml2 libxml2-devel httpd-devel pcre-devel curl-devel

Now that we have the dependencies in place, let's install mod_security. Unfortunately, there's no mod_security package in the default CentOS repositories, so you'll have to install it directly from the source:

## Get mod_security from its source ##
$ cd /usr/src
$ git clone https://github.com/SpiderLabs/ModSecurity.git

Now that we have mod_security on our server, we'll install it:

## Install mod_security ##
$ cd ModSecurity
$ ./autogen.sh   # a fresh git checkout may need this to generate the configure script
$ ./configure
$ make
$ make install

And we'll copy over the default mod_security configuration file into the necessary Apache directory:

## Copy configuration file ##
$ cp modsecurity.conf-recommended /etc/httpd/conf.d/modsecurity.conf
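One note about that recommended file: it ships with SecRuleEngine set to DetectionOnly, which means mod_security will log suspicious requests without blocking them. Once you trust your ruleset and want it to actually block, edit the copied file and flip that directive:

## /etc/httpd/conf.d/modsecurity.conf ##
# Ships as: SecRuleEngine DetectionOnly
SecRuleEngine On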

We've got mod_security installed now, so we need to tell Apache about it ... It's no use having mod_security installed if our server doesn't know it's supposed to be using it:

## Apache configuration for mod_security ##
$ vi /etc/httpd/conf/httpd.conf

We'll need to edit our Apache config file to load our dependencies (BEFORE the mod_security module) and then the mod_security module itself:

## Load dependencies (on 64-bit systems these may live in /usr/lib64) ##
LoadFile /usr/lib/libxml2.so
LoadFile /usr/lib/liblua5.1.so
## Load mod_security ##
LoadModule security2_module modules/mod_security2.so

We'll save our configuration changes and restart Apache:

## Restart Apache! ##
$ sudo /etc/init.d/httpd restart

As I mentioned at the top of this post, our installation of mod_security is good, but we want to enhance our ruleset with the help of OWASP. If you've made it this far, you won't have a problem following a similar process to install the OWASP core rule set:

## OWASP ##
$ cd /etc/httpd/
$ git clone https://github.com/SpiderLabs/owasp-modsecurity-crs.git
$ mv owasp-modsecurity-crs modsecurity-crs

Just like with mod_security, we'll set up our configuration file:

## OWASP configuration file ##
$ cd modsecurity-crs
$ cp modsecurity_crs_10_setup.conf.example modsecurity_crs_10_config.conf

Now we have mod_security and the OWASP core ruleset ready to go! The last step we need to take is to update the Apache config file to set up our basic ruleset:

## Apache configuration ##
$ vi /etc/httpd/conf/httpd.conf

We'll add an IfModule and point it to our new OWASP rule set at the end of the file:

<IfModule security2_module>
    Include modsecurity-crs/modsecurity_crs_10_config.conf
    Include modsecurity-crs/base_rules/*.conf
</IfModule>

And to complete the installation, we save the config file and restart Apache:

## Restart Apache! ##
$ sudo /etc/init.d/httpd restart
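Before we move on, it's worth a quick sanity check that Apache actually loaded the module and that the rules fire (the audit log location is set in modsecurity.conf, so your path may differ):

## Confirm the module is loaded ##
$ httpd -M | grep security2
## Send an obvious attack string and check that it was logged ##
$ curl -s "http://localhost/?q=<script>alert(1)</script>" > /dev/null
$ tail /var/log/modsec_audit.log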

And we've got mod_security installed with the OWASP core ruleset! With this default installation, we're leveraging the rules the OWASP open source community has come up with, and we have the flexibility to tweak and enhance those rules as our needs dictate. If you have any questions about this installation or you have any other technical blog topics you'd like to hear from us about, please let us know!

-Cassandra

January 29, 2013

iptables Tips and Tricks: APF (Advanced Policy Firewall) Configuration

Let's talk about APF. APF — Advanced Policy Firewall — is a policy-based iptables firewall system that provides simple, powerful control over your day-to-day server security. It might seem intimidating to be faced with all of the features and configuration tools in APF, but this post should put your fears to rest.

APF is an iptables wrapper that works alongside iptables and extends its functionality. I personally don't use iptables wrappers, but I have a lot of experience with them, and I've seen that they do offer some additional features that streamline policy management. For example, by employing APF, you'll get several simple on/off toggles (set via configuration files) that make some complex iptables configurations available without extensive coding requirements. The flip-side of a wrapper's simplicity is that you aren't directly in control of the iptables commands, so if something breaks it might take longer to diagnose and repair. Before you add a wrapper like APF, be sure that you know what you are getting into. Here are a few points to consider:

  • Make sure that what you're looking to use adds a feature you need but cannot easily incorporate with iptables on its own.
  • You need to know how to effectively enable and disable the iptables wrapper (the correct way ... read the manual!), and you should always have a trusted failsafe iptables ruleset handy (see the sketch below) in the unfortunate event that something goes horribly wrong and you need to disable the wrapper.
  • Learn about the basic configurations and rule changes you can apply via the command line. You'll need to understand the way your wrapper takes rules because it may differ from the way iptables handles rules.
  • You can't manually configure your iptables rules once you have your wrapper in place (or at least you shouldn't).
  • Be sure to know how to access your server via the IPMI management console so that if you completely lock yourself out beyond repair, you can get back in. You might even go so far as to have a script or set of instructions ready for tech support to run, in the event that you can't get in via the management console.

TL;DR: Have a Band-Aid ready!
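That Band-Aid can be as simple as a snapshot of a known-good ruleset taken with the stock iptables tools before you enable the wrapper (the file path here is just an example):

## Save a known-good ruleset before enabling APF ##
$ iptables-save > /root/iptables-failsafe.rules
## If things go horribly wrong, flush the wrapper's rules and restore ##
$ apf -f
$ iptables-restore < /root/iptables-failsafe.rules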

APF Configuration

Now that you have been sufficiently advised about the potential challenges of using a wrapper (and you've got your Band-Aid ready), we can check out some of the useful APF rules that make iptables administration a lot easier. Most of the configuration for APF is in conf.apf. This file handles the default behavior, but not necessarily the specific blocking rules, and when we make any changes to the configuration, we'll need to restart the APF service for the changes to take effect.
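As we'll see in the command line section at the end of this post, that restart is a one-liner:

## Reload APF after editing conf.apf ##
$ apf --restart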

Let's jump into conf.apf and break down what we see. The first code snippet is fairly self-explanatory. It's another way to make sure you don't lock yourself out of your server as you are making configuration changes and testing them:

# !!! Do not leave set to (1) !!!
# When set to enabled (1), a 5-minute cronjob is set to stop the firewall. Set
# this off (0) when the firewall is determined to be operating as desired.
DEVEL_MODE="1"

The next configuration options we'll look at are where you can make quick high-level changes if you find that legitimate traffic is being blocked and you want to make APF a little more lenient:

# This controls the amount of violation hits an address must have before it
# is blocked. It is a good idea to keep this very low to prevent evasive
# measures. The default is 0 or 1, meaning instant block on first violation.
RAB_HITCOUNT="1"
 
# This is the amount of time (in seconds) that an address gets blocked for if
# a violation is triggered, the default is 300s (5 minutes).
RAB_TIMER="300"
# This allows RAB to 'trip' the block timer back to 0 seconds if an address
# attempts ANY subsequent communication while still on the initial block period.
RAB_TRIP="1"
 
# This controls if the firewall should log all violation hits from an address.
# The use of the LOG_DROP variable set to 1 will override this to force logging.
RAB_LOG_HIT="1"
 
# This controls if the firewall should log all subsequent traffic from an
# address that is already blocked for a violation hit; this can generate a lot
# of logs. The use of the LOG_DROP variable set to 1 will override this to
# force logging.
RAB_LOG_TRIP="0"

Next, we have an option to adjust ICMP flood protection. This protection should be useful against some forms of DoS attacks, and the associated rules show up in your INPUT chain:

# Set a reasonable packet/time ratio for ICMP packets, exceeding this flow
# will result in dropped ICMP packets. Supported values are in the form of:
# pkt/s (packets/seconds), pkt/m (packets/minutes)
# Set value to 0 for unlimited, anything above is enabled.
ICMP_LIM="30/s"

If you want to block more ports for p2p traffic (these rules will show up in the P2P chain), you'd update this code:

# A common set of known Peer-To-Peer (p2p) protocol ports that are often
# considered undesirable traffic on public Internet servers. These ports
# are also often abused on web hosting servers where clients upload p2p
# client agents for the purpose of distributing or downloading pirated media.
# Format is comma separated for single ports and an underscore separator for
# ranges (4660_4678).
BLK_P2P_PORTS="1214,2323,4660_4678,6257,6699,6346,6347,6881_6889,6346,7778"

The next few lines let you designate the ports that you want to have closed at all times. They will be blocked for INPUT and OUTPUT chains:

# These are common Internet service ports for services that you would not want
# exposed or logged under normal circumstances. All ports that are defined here
# will be implicitly dropped with no logging for TCP/UDP traffic inbound or
# outbound. Format is comma separated for single ports and an underscore
# separator for ranges (135_139).
BLK_PORTS="135_139,111,513,520,445,1433,1434,1234,1524,3127"

The next important section to look at deals with conntrack. If you get "conntrack full" errors, this is where you'd increase the allowed connections. It's not uncommon to need more connections than the default, so if you need to adjust that value, you'd do it here:

# This is the maximum number of "sessions" (connection tracking entries) that
# can be handled simultaneously by the firewall in kernel memory. Increasing
# this value too high will simply waste memory - setting it too low may result
# in some or all connections being refused, in particular during denial of
# service attacks.
SYSCTL_CONNTRACK="65536"
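If you want to see the limit your kernel is currently enforcing before you change this value, you can query it with sysctl (the key name varies by kernel version, so treat these as examples):

## Check the current conntrack ceiling ##
$ sysctl net.netfilter.nf_conntrack_max
## On older kernels the key is: ##
$ sysctl net.ipv4.netfilter.ip_conntrack_max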

We've talked about the ports we want closed at all times, so it only makes sense that we'd specify which ports we want open for all interfaces:

# Common inbound (ingress) TCP ports
IG_TCP_CPORTS="22"
# Common inbound (ingress) UDP ports
IG_UDP_CPORTS=""
# Common outbound (egress) TCP ports
EG_TCP_CPORTS="21,25,80,443,43"
# Common outbound (egress) UDP ports
EG_UDP_CPORTS="20,21,53"

And when we want a special port allowance for specific users, we can declare it easily. For example, if we want port 22 open for user ID 0, we'd use this code:

# Allow outbound access to destination port 22 for uid 0
EG_TCP_UID="0:22"

The next few sections on Remote Rule Imports and Global Trust are a little more specialized, and I encourage you to read a little more about them (since there's so much to them and not enough space to cover them here on the blog). An important feature of APF is that it imports block lists from outside sources to keep you safe from some attackers, so the Remote Rule Imports can prove to be very useful. The Global Trust section is incredibly useful for multi-server deployments of APF. Here, you can set up your global allow/block lists and have them all pull from a central location so that you can make a single update to the source and have the update propagated to all servers in your configuration. These changes are synced to the glob_allow/deny.rules files, and they will be downloaded (and overwritten) on a regular basis from your specified source, so don't make any manual edits to glob_allow/deny.rules.

As you can see, conf.apf is no joke. It has a lot of stuff going on, but it's very straightforward and documented well. Once we've set up conf.apf with the configurations we need, it's time to look at the more focused allow_hosts.rules and deny_hosts.rules files. These .rules files are where you put your typical firewall rules in place. If there's one piece of advice I can give you about these configurations, it would be to check whether your traffic is already allowed or blocked before adding a new rule. Having multiple rules that do the same thing (possibly in different places) is confusing and potentially dangerous.

The deny_hosts.rules configuration will look just like allow_hosts.rules, but it's performing the opposite function. Let's check out an allow_hosts.rules configuration that will allow the Nimsoft service to function:

tcp:in:d=48000_48020:s=10.0.0.0/8
tcp:out:d=48000_48020:d=10.0.0.0/8

The format is somewhat simplistic, but the file gives a little more context in the comments:

# The trust rules can be made in advanced format with 4 options
# (proto:flow:port:ip);
# 1) protocol: [packet protocol tcp/udp]
# 2) flow in/out: [packet direction, inbound or outbound]
# 3) s/d=port: [packet source or destination port]
# 4) s/d=ip(/xx) [packet source or destination address, masking supported]
# Syntax:
# proto:flow:[s/d]=port:[s/d]=ip(/mask)

APF also uses ds_hosts.rules to load the DShield.org blocklist, and I assume ecnshame_hosts.rules does something similar (I can't find much information about it), so you won't need to edit these files manually. Additionally, you probably don't need to make any changes to log.rules unless you want to change exactly what you log. As it stands, it logs certain dropped connections, which should be enough. It might also be worth noting that this file is a script, not a configuration file.

The last two configuration files, preroute.rules and postroute.rules, are (unsurprisingly) used to make routing changes. If you have been following my articles, these correspond to the iptables PREROUTING and POSTROUTING chains, where you would do things like port forwarding and other advanced configuration that you probably don't need in most cases.

APF Command Line Management

As I mentioned in the "points to consider" at the top of this post, it's important to learn the changes you can perform from the command line, and APF has some very useful command line tools:

[root@server]# apf --help
APF version 9.7 <apf@r-fx.org>
Copyright (C) 2002-2011, R-fx Networks <proj@r-fx.org>
Copyright (C) 2011, Ryan MacDonald <ryan@r-fx.org>
This program may be freely redistributed under the terms of the GNU GPL
 
usage /usr/local/sbin/apf [OPTION]
-s|--start ......................... load all firewall rules
-r|--restart ....................... stop (flush) & reload firewall rules
-f|--stop .......................... stop (flush) all firewall rules
-l|--list .......................... list all firewall rules
-t|--status ........................ output firewall status log
-e|--refresh ....................... refresh & resolve dns names in trust rules
-a HOST CMT|--allow HOST COMMENT ... add host (IP/FQDN) to allow_hosts.rules and
                                     immediately load new rule into firewall
-d HOST CMT|--deny HOST COMMENT .... add host (IP/FQDN) to deny_hosts.rules and
                                     immediately load new rule into firewall
-u|--remove HOST ................... remove host from [glob]*_hosts.rules
                                     and immediately remove rule from firewall
-o|--ovars ......................... output all configuration options

You can use these command line tools to turn your firewall on and off, add allowed or blocked hosts and display troubleshooting information. These commands are very easy to use, but if you want more fine-tuned control, you'll need to edit the configuration files directly (as we looked at above).
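For example, to trust a new address on the fly and confirm the rule was loaded (192.0.2.10 is a placeholder address):

## Allow a host with a comment, then verify the rule is active ##
$ apf -a 192.0.2.10 "office VPN gateway"
$ apf -l | grep 192.0.2.10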

I know it seems like a lot of information, but to a large extent, that's all you need to know to get started with APF. Take each section slowly and understand what each configuration file is doing, and you'll master APF in no time at all.

-Mark

January 21, 2010

2010 PCI Compliance and You

I know you already know everything about PCI compliance, especially the ifs, ands, and buts that go along with it. But, just in case you forgot, here it is in a nutshell.
Is PCI compliance a federal law? Nope! Not yet anyway. Some states do make it a crime to let credit card data “be” stolen.
What is PCI? It is actually PCI DSS and it stands for Payment Card Industry Data Security Standard.
Who needs it? Anyone that accepts, transmits, or stores ANY credit card data.
Are there different levels? Yes, I am glad you asked.

  • Level 4 – Any merchant processing fewer than 20,000 credit card e-commerce transactions in a 12-month period
  • Level 3 – 20,000 up to 1 million transactions
  • Level 2 – 1 million up to 6 million
  • Level 1 – 6 million + (or any merchant that Visa feels should meet Level 1 to minimize risks). This is what we are all striving for, right?

Who cares if you are PCI compliant? For starters, YOU should! And secondly, your merchant bank will care. They will care more the larger you get. (See the “minimize risks” statement above.)
Since it isn’t a federal law, should I risk it, because I know my security and I am impenetrable? I wouldn’t take that risk, because you can still pay fines, card replacement costs, forensic audit costs, etc., if someone were to get in and steal data.
How can SoftLayer help? For starters, and as a quick Level 4 fix, you can go here and get free scanning on a single IP. Combine that with a “quick” questionnaire about your physical and data security policies and voila: no onsite visit needed, and you are now PCI Level 4. McAfee can help you with your higher-level compliance if you would like. Don’t take the questionnaire too lightly, because remember: you do care about PCI!
OK, so if you have made it this far, then you must like boring reading. Go read this. It might come in handy someday. It is the “do this if you get hacked” cheat sheet.
On to 2010! MasterCard stepped up in 2009 and stated that even its Level 2 merchants had to have an onsite QSA assessment by December 31, 2010. That has now been pushed to June 30, 2011. There seems to be some confusion among the other credit card companies, and they didn’t all jump on board. One thing that they did all agree on is that you can’t put credit card info on WEP-secured wireless at all after July 2010. Just don’t do it! And don’t use old, un-patched payment applications either, because they are insecure and will not be allowed after July.
This could all change just like Texas weather. If you don’t like the new rules, then just wait a couple of days and they may change them more to your liking. There are still a few things they are looking at going forward that I will let you in on, and then I assure you I will stop typing. PCI 1.2 is still about stopping hackers from getting in, but there is new interest in the community in addressing “internal” hackers. The current focus of PCI is aimed at card data “after” authorization and doesn’t say much about card data that is kept prior to authorization, so you can bet that will be addressed soon too. And of course, cloud infrastructure and card data have to be on everyone’s radar screens soon.
