Posts Tagged 'Attacks'

December 30, 2012

Risk Management: Event Logging to Protect Your Systems

The calls start rolling in at 2am on Sunday morning. Alerts start firing off. Your livelihood is in grave danger. It doesn't come with the fanfare of a blockbuster Hollywood thriller, but if a server hosting your critical business infrastructure is attacked, becomes compromised or fails, it might feel like the end of the world. In our Risk Management series, we've covered the basics of securing your servers, so the next consideration we need to make is for when our security is circumvented.

It seems silly to prepare for a failure in a security plan we spend time and effort creating, but if we stick our heads in the sand and tell ourselves that we're secure, we won't be prepared in the unlikely event of something happening. Every attempt to mitigate risks and stop threats in their tracks will be circumvented by the one failure, threat or disaster you didn't cover in your risk management plan. When that happens, accurate event logging will help you record what happened, respond to the event (if it's still in progress) and have the information available to properly safeguard against or prevent similar threats in the future.

Like any other facet of security, "event logging" can seem overwhelming and unforgiving if you're looking at hundreds of types of events to log, each with dozens of variations and options. Like we did when we looked at securing servers, let's focus our attention on a few key areas and build out what we need:

Which events should you log?
Look at your risk assessment and determine which systems are of the highest value or could cause the most trouble if interrupted. Those systems are likely to be what you prioritized when securing your servers, and they should also take precedence when it comes to event logging. You probably don't have unlimited compute and storage resources, so you have to determine which types of events are most valuable for you and how long you should keep records of them — it's critical to have your event logs on-hand when you need them, so logs should be retained online for a period of time and then backed up offline to be available for another period of time.

Your goal is to understand what's happening on your servers and why it's happening so you know how to respond. The most common auditable events include successful and unsuccessful account log-on events, account management events, object access, policy change, privileged functions, process tracking and system events. The most conservative approach actually involves logging more information/events and keeping those logs for longer than you think you need. From there, you can evaluate your logs periodically to determine if the level of auditing/logging needs to be adjusted.
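That periodic evaluation can start very simply: tally how often each event category actually fires. A category that never appears may be over-audited; one that dominates may deserve its own retention policy. The log-line format below (a leading category token) is an assumption for illustration:

```python
from collections import Counter

def tally_event_types(lines):
    """Count log lines by their leading category token, e.g.
    'LOGON_FAILURE user=bob ip=10.0.0.5' counts toward LOGON_FAILURE.
    Adapt the parsing to your own log format."""
    counts = Counter()
    for line in lines:
        category = line.split(maxsplit=1)[0]
        counts[category] += 1
    return counts
```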

Where do you store the event logs?
Your event logs won't do you any good if they're stored in a space that can't accommodate the amount of data you need to collect. I recommend centralizing your logs in a secure environment that is both readily available and scalable. In addition to the logs being accessible when the server(s) they are logging are inaccessible, aggregating and organizing your logs in a central location can be a powerful tool to build reports and analyze trends. With that information, you'll be able to more clearly see deviations from normal activity and catch attacks (or attempted attacks) in progress.

How do you protect your event logs?
Attacks can come from both inside and out. To avoid intentional malicious activity by insiders, separation of duties should be enforced when planning logging. Learn from The X Files and "Trust no one." Someone who has been granted the 'keys to your castle' shouldn't also be able to disable the castle's security system or mess with the castle's logs. Your network engineer shouldn't have exclusive access to your router logs, and your sysadmin shouldn't be the only one looking at your web server logs.

Keep consistent time.
Make sure all of your servers are using the same accurate time source. That way, all logs generated from those servers will share consistent time-stamps. Trying to diagnose an attack or incident is considerably more difficult if your web server's clock isn't synced with your database server's clock or if they're set to different time zones. You're putting a lot of time and effort into logging events, so you're shooting yourself in the foot if events across all of your servers don't line up cleanly.
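If you're stuck correlating logs from servers that were set to different time zones, normalizing every timestamp to UTC is the first step. The offsets below are illustrative; the real fix is to sync every host to the same NTP source and log in UTC from the start:

```python
from datetime import datetime, timezone, timedelta

def to_utc(local_ts, utc_offset_hours):
    """Attach a fixed UTC offset to a naive local timestamp and
    convert it to UTC, so events from differently-zoned servers
    can be compared on one timeline."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local_ts.replace(tzinfo=tz).astimezone(timezone.utc)
```

For example, 2:00am on a server set to UTC-6 and 8:00am on a server set to UTC are the same instant, which is impossible to see until both are normalized.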

Read your logs!
Logs won't do you any good if you're not looking at them. Know the red flags to look for in each of your logs, and set aside time to look for those flags regularly. Several SoftLayer customers — like Tech Partner Papertrail — have come up with innovative and effective log management platforms that streamline the process of aggregating, searching and analyzing log files.
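One of the most common red flags is a burst of failed log-ons from a single address. A minimal scan for that pattern might look like the sketch below; the regex matches OpenSSH-style "Failed password" lines and the threshold is an assumption, so adapt both to your own logs:

```python
import re
from collections import Counter

# Matches lines like:
#   "Failed password for root from 203.0.113.9 port 4242 ssh2"
FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def suspicious_ips(lines, threshold=5):
    """Return the set of source IPs with at least `threshold`
    failed log-on attempts -- candidates for a closer look."""
    counts = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return {ip for ip, n in counts.items() if n >= threshold}
```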

It's important to reiterate that logging — like any other security endeavor — is not a 'one size fits all' model, but that shouldn't discourage you from getting started. If you aren't logging or you aren't actively monitoring your logs, any step you take is a step forward, and each step is worth the effort.

Thanks for reading, and stay secure, my friends!

-Matthew

October 8, 2010

From Zero to Ten in 10

Our second Dallas data center went live 10 days ago and we are already pushing 10 Gbps of sustained traffic out the door. I have spent some time in the DC with some of our ops guys, and the place is impressive.

A terrific amount of computing power sits in row after row of server racks, driving a diverse array of businesses in more than 110 countries. Each rack features powerful processors, lots of RAM and heaps of storage. There is very little that our customers are unable to do over SoftLayer’s infrastructure. And if they need more, SoftLayer can add additional servers very quickly to meet this demand. I wish the rest of our business were as simple as this.

Despite the state-of-the-art infrastructure that sits in the DC, it remains a challenge to meet the needs of our customers. Why? Network, that’s why. SoftLayer’s challenge will be to continuously stay ahead of our customers’ demands, primarily in the network. If the network is unable to support the traffic that is pushed across our DC, everything comes tumbling down.

To a degree, we are victims of our own success. As we add servers to racks, we are placing increasing demand on the network. The more successful we are, the more pressure we place on the network.

Consider the following statistics:

  • When SoftLayer went live five years ago, we used two carriers and pushed 20 Gbps out the door.
  • Four years ago, this had gone up to four carriers and eight 10 Gbps links.
  • In January 2009 we pushed about 70 Gbps of sustained traffic. And this doubled for President Obama’s inauguration.
  • Today we use over ten carriers, with over 1000 Gbps of capacity.
  • In addition to the needs that our customers drive, we cannot forget DDoS attacks, which add significant load to the network. We consistently absorb and successfully defend against attacks of 5 Gbps, 10 Gbps or more, and the peaks have grown by a factor of ten since SoftLayer went live.

The trend revealed is significant – in five years the amount of traffic sustained over our network has increased by more than ten times. And it shows little sign of slowing down.

Suffice it to say, we spend a significant amount of time designing our networks to ensure that we are able to handle the traffic loads that are generated – we have to. Aggressively overbuilding the network brings us some short-term pain, but if we are going to stay ahead of demand it is simply good business (and it makes sure our customers are happy). The new DC in Dallas is a great example of how we stay ahead of the game.

Each server has 5 NICs – 2 x 1 Gbps (bonded) for the public network, 2 x 1 Gbps (bonded) for the private network and one for management. The net of this is that customers can push 2 Gbps to the internet assuming server processors can handle the load.

-@quigleymar

December 3, 2009

Hey, I just got an email saying I won a million dollars! *Click* Wait, what just happened to my computer?

This is usually how it starts. Some shady person sends out spam telling people they have won a million dollars or a free laptop or mp3 player, with a link to a form they need to fill out to claim their prize. Only you don’t win an mp3 player or laptop. You win an infected computer that is now a drone in a much larger botnet. This botnet is used either for direct malicious purposes (Denial-of-Service attacks) or indirect malicious purposes (spam, phishing, etc).

How do you stop this from happening to you and keep from becoming “that guy”? Don’t click links in email unless you’re 100% sure who it’s from and what it’s for. That’s the basic rule to remember. Secondly, make sure you have an anti-virus program that’s capable of scanning email and keeping your system protected from malicious browser exploits. Thirdly (and this should go without saying, but I’m saying it anyway), make sure your computer (and all software) is up-to-date. Sure, there’s the occasional bug and 0-day exploit on up-to-date systems, but there’s a whole slew of exploits and things that can be done to an un-patched system. Keep your systems up-to-date and you reduce the “known” exploits from literally thousands to maybe a few.
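One classic tell in these emails is a link whose visible text shows one domain while the actual href points somewhere else entirely. As a toy illustration (the inputs are assumed to already be extracted from the message), a mismatch check might look like:

```python
from urllib.parse import urlparse

def link_looks_suspicious(display_text, href):
    """Return True when the domain shown to the reader differs from
    the domain the link actually goes to -- a common phishing tell."""
    shown = urlparse(
        display_text if "://" in display_text else "http://" + display_text
    ).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual
```

Mail clients and anti-phishing filters do far more than this, but the underlying idea is the same: never trust that a link goes where its text says it does.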

Think about this: 80% of the world’s email is considered spam. Of that 80%, the vast majority (more than 75%) is sent using infected computers (drones). If everyone would re-think blindly clicking links in emails and on webpages (social networking sites have a history of people trying to fool users into clicking bad links), then the spammers wouldn’t have drones available to them to send spam. Interesting thought, isn’t it? Let’s stop spam by being smart internet users and denying the “bad guys” the resources they need to send out the spam.
