The realm of software and internet security is so broad that it's hard for anyone to claim expertise across the entire range of security concerns. For example, your organization's weakest link could be its routers and firewalls, where acronyms like DMZ and SPI are common lingo. It could be at the system level, in kernel patches and intrusion detection software. Or it could be in how your software's tiers are architected: if your webapp uses https, but that encryption terminates at the web server while the server's SQL calls to the database travel in the clear, you've only solved part of the problem. Perhaps your programming language is susceptible to buffer overflows? Maybe your app doesn't validate its inputs properly, allowing cross-site request forgery or cross-site scripting attacks? Or maybe it's a basic data problem, where sensitive data like passwords or credit card numbers are stored in clear text. Maintaining security across all of these layers is enough to intimidate even sophisticated professionals.
But there's hope. Much like real-world security, the first step is vigilance and presenting a hard target. The safest house on the block isn't necessarily the one its owner turned into a veritable Fort Knox. Often, it's sufficient to harden the target enough to convince malicious people to look elsewhere. So what follows are the most critical steps you can take, whether you're responsible for your home wifi network, a hacker-entrepreneur sole proprietor, or in charge of a handful of engineering teams at a well-established online company.
The first thing to realize is that malicious hackers continually run probes across the internet, harvesting the responses from potential soft spots. Targets are often picked from those that fit a particular exploitable profile: hackers monitor security mailing lists and websites for published exploits, then match them against the vulnerable systems they've already catalogued.
It's similar to hanging out at a shopping mall and compiling a database of where various makes and models of cars are parked. When the hacker noticed the '86 Ford Thunderbird drive by, he simply noted where it parked. But some time later, while monitoring vulnerability announcements, he runs across a brand-new exploit for an '86 Ford Thunderbird, and now he knows exactly where to go to try out the latest technique.
Fix Your Leaks
"Information leakage" is the practice of letting out details that you have no business exposing, and which represent a risk to your security profile. When a hacker is deciding, more or less at random, whom to attack, this can mean the difference between being passed over and a potential debacle.
Some argue that "security by obscurity" (simply hiding what you're doing) isn't really security. But in my book, a great many exploits can be avoided by making it difficult for the "script kiddies" (hackers that are largely successful because they follow a recipe) to know what you're running.
For example, most software is happy to advertise its version. Even www.microsoft.com reports that it is running "Microsoft-IIS/7.5". Searching a security website turns up a "Microsoft IIS FTP Service Remote Buffer Overflow Vulnerability". Now I know that could be an avenue to exploit microsoft.com, particularly if they are running an unpatched version of IIS to serve FTP.
And even if microsoft.com isn't vulnerable, how many millions of websites are running the same software? If I'm a determined hacker, I've already harvested a list of servers that listen on port 21 (the FTP port); now I just need to cross-check that list for servers gracious enough to report themselves as "Microsoft-IIS/7.5". With such a list generated by automated robots/probes, I could potentially exploit that flaw on dozens, possibly hundreds, of systems.
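That cross-check is trivial to automate. Here's a minimal sketch (with made-up hosts and banner data, not real probe results) of how a harvested list might be filtered down to a target list:

```python
import re

def parse_server_banner(banner):
    """Split a Server header like 'Microsoft-IIS/7.5' into (product, version)."""
    match = re.match(r"([^/\s]+)/([\d.]+)", banner)
    return match.groups() if match else (banner, None)

# Hypothetical (host, port-21-open, banner) tuples a probe might harvest.
harvested = [
    ("203.0.113.10", True,  "Microsoft-IIS/7.5"),
    ("203.0.113.11", False, "Microsoft-IIS/7.5"),
    ("203.0.113.12", True,  "nginx/1.18.0"),
]

# Cross-check: hosts that advertise the vulnerable product AND listen on FTP.
targets = [host for host, ftp_open, banner in harvested
           if ftp_open and parse_server_banner(banner) == ("Microsoft-IIS", "7.5")]
print(targets)  # only the first host matches both criteria
```

The point isn't the dozen lines of code; it's that an attacker never has to look at your site individually to single you out.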
Bottom line: Don't tell the world what software you use. If you must, limit it to general concepts. While there are more sophisticated ways for people to tell what you're running, there's no point in setting the bar so low that the most unaccomplished hacker can find an exploit to use on you.
Keep a Low Profile
If it's not mission critical, it should not even be installed, let alone running or accessible outside your firewall! After all, anything accessible by a customer should be considered a production system. You should know precisely what "attack vectors" your production systems possess; namely, the services you must keep open because that's what it takes to do business.
Everything else should be locked down, shut down and, if possible, uninstalled. Much as in Monsters, Inc., your corporation should appear to the outside world as nothing but a single doorway. You can't go around it; you have to come through the front door, where you can control what comes in and what goes out.
The same thing holds for the rest of your systems. If nothing but your web servers should be talking to your SQL servers, why aren't the SQL servers locked down to allow access only from those web servers (aside from administrative access, which you monitor and manage like a hawk, right?)?
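One way to keep yourself honest here is a periodic audit comparing what's actually exposed against what you've decided should be. A minimal sketch, with hypothetical port data (in practice you'd gather the observed set from a port scan or from `ss -tln` on the host):

```python
# The short list of services you have consciously decided to keep open.
allowed = {80, 443}

# What the host actually exposes -- hypothetical numbers for illustration.
observed = {22, 80, 443, 3306}

# Anything open but not on the allowlist is an attack vector to justify or close.
unexpected = observed - allowed
for port in sorted(unexpected):
    print(f"port {port} is open but not on the allowlist -- lock it down or justify it")
```

Run something like this on a schedule and the MySQL port that someone "temporarily" opened to the world gets caught before a probe finds it.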
Finally, remove all unnecessary accounts: demo/test accounts, old employee accounts, and default users and passwords have no place in a production system.
Bottom line: Nobody in their right mind hangs out in a hurricane wearing a parachute. Reduce your profile, and expose as little as possible of yourself to the tempest.
Stay Up To Date
For every malicious hacker who uses security vulnerability information to find exploits, there are hundreds, if not thousands, of computer security professionals who pore over products, look for holes, and report vulnerabilities so they can be fixed. This is good news: the misanthropes are generally outnumbered by several orders of magnitude. Provided, of course, that you take the fruits of their hard work and exercise them: install software patches and fixes to your critical infrastructure as soon as is reasonably possible.
If you recall, the probes are out there, they're running, and people are trying to determine "what site uses what software?" This means that as soon as a vulnerability is published for software you're running, the clock is ticking between someone realizing that the vulnerability exists on your system and them getting around to actually knocking on your door and finding it wide open.
Bottom line: If hackers have to hit you within 2-4 hours of an exploit being available, you need to be a really attractive target, you need to have extremely dedicated hackers (ones that are out to get you), or some combination thereof to have such exploits used against you. But when you're lazy and don't install updates when they're available, it's like putting the welcome mat out.
Know Your Environment: "Threat Model" It
The basis for threat modeling is best exemplified by a scene from the movie "Twister": hide in the small shed, where there's a limited number of things that can kill you, rather than in the big barn full of sharp, rusty, pointy objects.
This goes to software engineering best practices. As a simple example, if you are building a social networking feature that includes friending, make sure the feature doesn't accept both the friender and the friendee as request parameters. Web applications should never trust data provided by the user: your application should consult its own record of who the authenticated user is before adding a friend on their behalf. This is good design that experienced developers know to implement.
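To make the friending example concrete, here's a minimal sketch (hypothetical function and field names, not any particular framework) of the difference between trusting the client and trusting the server's own session:

```python
# The client may only name the friendee; the friender comes from the server's
# session record established at login, so a forged request can't create
# friendships on someone else's behalf.

def add_friend(session, request_params, friendships):
    friender = session["user_id"]          # trusted: the authenticated user
    friendee = request_params["friendee"]  # untrusted client input
    friendships.add((friender, friendee))
    return friender, friendee

friendships = set()
# Mallory forges a "friender" parameter in the request body...
add_friend({"user_id": "alice"},
           {"friendee": "bob", "friender": "mallory"},
           friendships)
# ...but the server never reads it, so the forged value is simply ignored.
print(friendships)  # {('alice', 'bob')}
```

The design choice is the whole point: there is no code path by which client data can set the friender, so there's nothing for an attacker to tamper with.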
However, even designs produced by the most experienced developers are potentially flawed when the design phase doesn't include a "threat modeling" exercise where devil's advocate questions are asked.
Bottom line: By asking questions ranging from "how could this be used with ill intent?" to "what happens if someone tries to do this hundreds of times a minute?", threat modeling helps developers understand what emergent properties their designs should have... That is, what are the requirements that the business owner would never know to ask for, but are assumed to be requirements?
Manage Reporting & Analytics
I'm a firm believer in determining the "performance envelope" of a system. In engineering physical systems, for example, engineers will establish the proper environment in which their design should be used, and calibrate their instrumentation to those criteria.
For example, if an airplane designer knows their airplane will stall below 60 knots, the air speed indicator might be calibrated to show green at speeds comfortably above that. Warning lights might come on as you slow toward 80 knots, and alarm bells might go off once you reach 60 knots or below to indicate "you're about to crash".
All of your critical system logs should have monitoring scripts set up to indicate when things are "outside the envelope". Whether your monitoring tells you these things directly, or just reports the metrics and you can determine whether they're out of whack is a matter of preference.
But you really should know how many people are coming to your site each day. If that number climbs by a factor of 100, wouldn't it be nice to know that 99% of the traffic came from 4 different IP addresses? That might indicate a DoS attack or some poorly behaving robot! How many people typically submit password or email change forms each day? How many 404 errors do you get each day? How many of them are for legitimate URLs you used to have and how many are probes from hackers?
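A few lines of scripting are enough to surface that kind of anomaly. A sketch, using made-up request data in place of a parsed access log (the 99%-from-4-addresses threshold is illustrative, not a standard):

```python
from collections import Counter

# Hypothetical day of request source IPs, as you might extract from an access log.
requests = (["198.51.100.1"] * 5000 +
            ["198.51.100.2"] * 4800 +
            ["203.0.113.%d" % i for i in range(100)])

counts = Counter(requests)
top = counts.most_common(4)                      # the 4 busiest source IPs
top_share = sum(n for _, n in top) / len(requests)

if top_share > 0.9:
    print(f"{top_share:.0%} of traffic came from {len(top)} IPs -- "
          "possible DoS or misbehaving robot")
```

The same pattern applies to password-change submissions, 404s, or any other metric with a known envelope: count, compare against the normal range, and alert when you're outside it.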
That also means keeping your critical logs as clear of cruft as possible. If it's not a critical error, why are you logging it? Debug output should be limited in your production environment and silenced once debugging is complete. And if you're not comfortable writing your own tools to tell you what's within the envelope and what's pushing it, tools like Splunk can help with basic intrusion detection and log analysis tasks.
Bottom line: Every system, even stochastic ones like web site environments and corporate computer networks, has a predictable range within which the bulk of its activity falls most of the time. The goal is to be aware of the patterns your site exhibits so you can notice anomalous behavior.
These are some basic steps that anyone, at any level of sophistication, can implement. As a broad strategy, they should help harden your site into a less attractive target. And should you be a very juicy target for a determined hacker, the last step in particular can at least let you know the barbarians are at the gate, so you know to enlist the help of an expert.