Sounds awesome, doesn’t it?
Unfortunately, I’m not talking about getting you elected to the highest, most powerful office in the world.
No, sadly I’m talking about the likelihood that your e-mail will get hacked and pictures of you in the shower will show up on the Interwebz.
Ask yourself: when was the last time you sent an e-mail that you didn’t want anyone else to see? It may have been complaints about your boss, or sweet nothings to your girlfriend. It could have been tax or financial information, or perhaps something about a medical issue.
And you probably keep e-mail around forever, right?
I’ve seen people with thousands of e-mails still in their Inbox. They didn’t think to move them to another folder or delete them after they read them.
Receipts from online purchases. New account registrations and password changes. They just sit there like little gold nuggets, waiting for a miner.
The reality is, we all do it. Just like Ashton Kutcher, Sarah Palin and Lindsay Lohan, we normal people use e-mail for just about everything. And few of us truly think about or understand just how sensitive, or how critical, e-mail has become.
Until their undergiblets show up in a Google images search.
So take a moment today to manage that risk down a little. If your e-mail is compromised it probably exposes a whole pile of other things.
Make sure you have a good password. If your e-mail service offers multi-factor authentication (SMS, token, etc.), consider it. Delete e-mail that you don’t need anymore. Think about the things that you send through e-mail before you send them – if they ended up in the wrong hands would you be OK with it?
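If “a good password” feels abstract, here is one minimal sketch using only Python’s standard library: a random passphrase built with a cryptographically secure generator. The word list below is a stand-in for illustration; a real generator would draw from a large dictionary such as the EFF Diceware list.

```python
import secrets

# Stand-in word list for illustration only; a real passphrase generator
# would draw from a large dictionary (e.g. the EFF Diceware list).
WORDS = ["correct", "horse", "battery", "staple", "orbit", "walrus",
         "maple", "quartz", "ember", "lagoon", "pillow", "granite"]

def make_passphrase(n_words=4, separator="-"):
    """Join words picked uniformly at random with a CSPRNG (secrets)."""
    return separator.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())
```

The point of `secrets` over `random` is that the choices are unpredictable even to someone who knows the word list; with a big enough dictionary, four or five words beats most “clever” eight-character passwords.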
Because it may sound awesome, but you don’t want to be the next President.
Two weeks ago travelers in the Austin, TX Amtrak station got a big surprise – a squad of anti-terrorism forces armed with assault rifles and specialized inspection equipment. It was just one of hundreds of [probably not so] random appearances being made by the Transportation Security Administration’s (TSA) VIPR Team all across America.
The VIPR (Visible Intermodal Prevention and Response) team is not new; in fact, it was launched in 2005 after the train bombings in Madrid. Its tactics, however, have been changing over time. Random appearances are part of their “new strategy”.
Since September 11, law enforcement and counter-terrorism agencies have been focusing on the areas that, at the time, appeared to have the greatest exposure. Airlines, densely populated urban areas and critical infrastructure all made the list.
Unfortunately, our enemies are smart enough to strike where our defenses are least fortified.
Enter the VIPR Team.
The bombing in Madrid ushered in a new phase of terrorism, and subsequently a new phase of security. Our enemies began attacking softer targets, becoming more unpredictable. It was the definition of terror. We could take a few lessons from this new thinking.
During a half-day conference in Albany, NY recently we had the opportunity to speak to over one-hundred security professionals about the current state of information security. We discussed current trends, new threats and some recently targeted organizations. When it was over, we passed around a pocketknife and about a hundred audience members joined our wolfpack.
Perhaps most important of all the topics we discussed was the failure of the things we trust most in information security today. Cornerstones like defense in-depth, antivirus and least privilege. They all sound great, but the problem is, they’re not working.
Maybe it’s because we don’t have the resources. Maybe it’s because security still isn’t a priority for many organizations. Maybe it’s because we’re not measuring performance.
Or maybe, just maybe, these things are so predictable that our enemies know exactly how to get around them.
If I were an Internet criminal operating out of unsaid country in Eastern Europe, I would have a pretty good idea of where to start. I’d know which rootkits and payloads I’d need to deliver, and how to get them to their intended targets.
I’d know pretty much what to expect once my backdoor was operational, and I’d have a pretty good idea of how to pivot around my subject’s network. I’d know how to exfiltrate my objective and which tracks to cover.
And this goes for any organization.
How could this be? It’s not because I’m that smart or have intel on every company out there. It’s because most organizations [don’t] defend themselves in the same way.
So here’s an idea: the next time an uninvited intruder shows up on your network, surprise them. Utilize a control in a different way, or implement it somewhere it normally isn’t found. Take a look at all of the things you’re doing, turn them 90 degrees, spin them once, give them a kick and see where they land. If they could be effective there in a different way, consider making the change.
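One small, concrete version of “implement it somewhere it normally isn’t found”: plant a listener on a port where nothing legitimate lives, so any contact with it is a signal by definition. The sketch below is illustrative, not a hardened honeypot; the port number is arbitrary and in practice you would wire the hits into your alerting, not just collect them.

```python
import socket

def tripwire(host="127.0.0.1", port=2323, max_hits=1):
    """Listen where an attacker would expect a real service and record
    who connects. Nothing legitimate lives on this port, so any touch
    is suspicious by definition."""
    hits = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    while len(hits) < max_hits:
        conn, addr = srv.accept()
        hits.append(addr[0])  # in practice: alert here, don't just record
        conn.close()
    srv.close()
    return hits
```

Run it in a background thread or as a small daemon; the value isn’t the code, it’s that an attacker sweeping your network has no way to know which “services” are real and which are tripwires.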
Predictability is a vulnerability in itself. The VIPR Team has figured this out and so can we.
I feel proud today.
Like apple pie, hot dogs and online bank fraud, nothing is more American than personally selecting (kinda) the next President of the United States. And doing it in the hometown of Uncle Sam makes it that much more special.
But lately I’ve become more concerned about the integrity of my vote.
My concern starts with the security of the voting machines themselves. There are only two main types of electronic voting machines: optical scanners, and direct-recording machines, where voters press buttons that are digitally recorded. Both types of machines have been compromised on numerous occasions.
In one case the voting machine was so vulnerable researchers were able to install Pac-Man on it. One team member was quoted, saying that it only required an 8th-grade education and $10.50 to hack the machine.
We also know that the networks, storage and computers that the machines rely on are vulnerable. As are the people involved in the voting process.
But this is not my concern.
What I find most worrisome is this: if and when it happens, how will we know?
Happy voting America.
Hurricane Sandy, appropriately named after a slow-moving but powerful family member of yours truly, spent the last few days wreaking havoc on the East Coast.
And while some of us made it through with just a bit of sideways rain, I’m sure there are more than a few businesses out there putting a Business Continuity Plan on their “To Do” list this morning.
Better late than never, they say.
Or is it? After all, Upstate New York has experienced an earthquake, a tornado, epic flooding and two hurricanes in the past fifteen months. This in an area that is considered relatively protected from Mother Nature.
Tonight, on All Hallows’ Eve, most of us will engage in some sort of ghoulish tradition, whether carving a pumpkin for the front stoop or trick-or-treating with the kiddies. And yet we know that most, if not all of these activities can end in some kind of trouble.
Chances are good that the creepy teenager down the block with the acne and the freakishly thick eyebrows is going to smash your pumpkin. Someone’s car is going to get a clean shave. And Mrs. McGillicutty’s willow tree is probably getting TPd.
But despite all of this, we trust our kids and neighbors to make it through the night without serious damage. We trust that things won’t get out of hand. We trust that people won’t kill each other over a bag of treats.
And in that apparent weakness lies one of our greatest strengths. In trust we gain the ability to go about our lives. To interact with others. To exist.
Without trust, we could not walk down the street at night without checking every dark corner. We couldn’t approach a stranger’s door without a background check. We couldn’t eat candy without inspecting every chocolatey bite.
Without trust, we could simply not function.
Trust is at the heart of every security model on planet Earth. Despite popular wisdom, the security controls that we put in place to protect our information, people and other assets imply some measure of trust in their relationships.
We trust that a firewall will disallow specific protocols on specific ports. If we didn’t, we wouldn’t buy them. But like the creepy kid down the street, trust only goes so far.
At some point, you need to verify.
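Applied to the firewall example: don’t just trust the rule set, probe it. Here is a minimal sketch that checks whether anything actually answers on a port you believe is blocked; the host and port below are placeholders, not a recommendation of what to scan.

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Attempt a real TCP connection; True means something accepted it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# We *expect* telnet (port 23) to be blocked on this host. If this
# prints True, the trust was misplaced and the rule needs a second look.
print(port_is_open("127.0.0.1", 23))
```

The same idea scales up: verification is comparing what the control is supposed to do against what an outside observer actually sees it do.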
And what better time than Halloween for a lesson in verification? Whether it’s the batteries in your flashlight, the traffic crossing in front of your little Spiderman or the bra strap on your girlfriend’s Lady Gaga BaconSuit costume, sometimes you just need to verify.
Halloween is no time for a wardrobe malfunction.
Sometimes, security just sucks.
It was never meant to be that way. In fact, done properly security should support a business goal or a higher-level strategy. When it’s done well, security is not painful and it serves a purpose. It protects things worth protecting. It saves our @sses.
When it’s not done well, well…
I went out-of-town for a few days last week for the holiday. It was a last-minute decision, but a good one. The trip was short and sweet, and local. I used a hugely popular travel web site to make hotel reservations. To protect the not-so-innocent, the travel provider will remain nameless. But let’s just say that it wasn’t Expedia or Orbitz and it starts with a “hotels.com”.
Lately we’ve been using this service for business travel, as you can rack up free hotel stays quickly as long as you make reservations through their web site. Of course, you need to log in to your account before making your reservations – this I would learn the hard way.
The trip was wonderful – we did some biking, ate some great food and got to sleep in. Things all vacations should be made of.
Getting credit for the hotel stays was another story.
What I thought would be a quick call to the provider started out badly and only got worse.
“Thank you for calling [hotel provider], can I help you?”
I explained that I needed to add credits to my account for stays that I had just completed. The customer service representative immediately requested my name, account number, DNA chains and a bunch of information that made me queasy. I asked politely why they needed this information for this activity, and why they would have had this information anyway. I certainly hadn’t provided it prior. These are hotel reservations after all, not the codes to The Football.
I then asked her if she could get me the secret recipe for Coke, while she was at it. Either she didn’t get it or she didn’t think I was funny.
Making a long story short, I will be calling my hotel provider back on Monday, as this situation still isn’t resolved.
This is why people shudder when IT or their company’s Information Security team starts talking about reinforcing security controls or “locking things down”. Never mind matching your controls to your organization’s culture and personality (something we almost never see); at a minimum, your security implementation should match your risk.
Even the Secret Service lets the President kiss a few babies.
I will be calling back on Monday and immediately asking for a supervisor. When I get him or her on the phone, I will do my best to refrain from security advice.
But I might still ask for that Coke recipe.
Earlier this month, security media were ablaze with news of the freshly discovered Flame malware toolkit, which according to reliable sources began infecting Iranian computers as early as 2008.
Since the first reports, we’ve learned more about Flame, its capabilities and intent. The results of this analysis have been impressive and sobering.
Like its alleged sibling, Stuxnet, Flame is highly sophisticated, purpose-built and effective. As someone who spent many years in software development, I appreciate what it takes to write code for many platforms and devices while minimizing flaws. The authors of Stuxnet and Flame deserve credit for this, if nothing else.
Unlike Stuxnet, Flame is a toolkit – a veritable Swiss Army knife – of attacks that can be activated remotely by its command and control operator. The Flame payload is delivered such that all of the modules are available and integrated into the initial assembly, with no additional download or communication required.
Bluetooth sniffing, keylogging, an Autorun infector, the ability to hijack the Windows AutoUpdate function and more – up to twenty unique modules – all nicely packaged in one nefarious kit.
With all of this, Flame may have supplanted Stuxnet as the most complex and sophisticated piece of weaponized software ever developed in the [known] history of mankind.
But as powerful as Flame seems, the economic ecosystem on which it’s built may be even more interesting.
For decades, Microsoft, Adobe, Google and Oracle have been recruiting, paying for and getting the absolute best and brightest software designers, architects and developers on the planet. Until now.
In this post-neo-infosec-challenged world that we live in, the uber-software Gods work for the bad guys.
You may not put it on your CV or LinkedIn profile, but if you want a fun, exciting, incredibly well-paying job writing the newest, coolest and most coveted code on the planet, move to Romania and hook up with a Russian cybergang.
And it gets worse. As these malicious international software factories become more successful, they get richer, they buy better people and the cycle repeats.
Over the past several weeks the FBI, Interpol and other international law enforcement agencies arrested twenty-four individuals suspected of various card fraud schemes and activities. Suspects were spread out across thirteen countries around the world. One of them was arrested less than 45 minutes from GreyCastle Security headquarters.
None of them were software developers.
The people most typically being arrested for online crime are the individuals using the tools, not the ones building them. No, these digital mercenaries are tucked safely away in their posh Baroque villas on the outskirts of some small town in Estonia, busy writing their next module and withdrawing laundered cash from untraceable bank accounts.
And the hits keep coming. And the fire burns brighter.
Flame may just be the spark that starts the inferno.
The United States Military has spent a lot of time and money developing its Special Operations forces. These elite teams of security operatives are highly trained to shoot, move and communicate under duress, with little or no advance notice of their impending scenario. Most missions involve dynamic entry, assessment and neutralization of threats, all while achieving a primary objective.
Seal Team 6 is the most famous of all the Special Operations teams. These are the operatives that led the capture and termination of Osama bin Laden, the maritime rescue of American sailors on the Maersk Alabama and the recent recovery of foreign journalists held hostage by Al-Shabaab terrorists. The best of the best has handled the worst of the worst.
Imagine what you could get done if you had your own Seal Team 6.
Think it sounds crazy?
On the surface, this legendary team appears to have accomplished near-mythical tasks. Once you start to break their missions and objectives down into their most basic milestones, actions and counteractions, however, their methods become much simpler.
Like Seal Team 6, cybersecurity has become a household word and responding to security incidents has become a common occurrence. Intellectual property, sensitive data and bank accounts have never been more at risk, and detecting, containing and correcting security incidents requires planning, practice and grace under pressure. Once developed, your Computer Incident Response Team (CIRT), Special Operations Team (SOT) or whatever you call it (just don’t call it Seal Team 7, that’s taken) will give your organization a “force multiplier”.
Force multipliers are assets that give your organization a strategic advantage and increase the effectiveness and efficiency of overall operations. These assets, or teams multiply the force of your organization because they carry out tasks that would normally require much greater resources to accomplish. This is possible due to the highly trained and focused nature of the team.
And yes, you can have your own.
Here’s what you need:
- Find the right operatives – Incident responders are born, not made. This isn’t completely true, but it takes a certain mindset and attitude to gain situational awareness, remain calm and lead when an organization is under attack. These skills are tough to teach. You probably already know who your operatives are. If you don’t, stop here and look for a qualified security provider that already delivers Incident Response services.
- Select the right weapons – A good shooter can make a bad gun shoot well. You don’t need the most expensive security tools, but you do need to make what you’ve got work effectively. If you’re about to embark on a digital forensics mission, you better have the right tool in your bag. And carry a backup. Your tools should meet their requirements and work consistently.
- Train, train, train – Training is the most important of all, and it should incorporate the following:
- The basics – Every Navy Seal is required to qualify with a rifle every year, regardless of his or her rank. It’s part of being a Seal. The ability to shoot, move and communicate makes our Military the greatest on Earth. Your CIRT resources should understand the basics of threat modeling, analysis, communication and tool manipulation. No exceptions.
- Scenario drills – Your training should incorporate regular attack and counterattack simulations – these should be as real as possible. Good penetration testing is critical, but tabletop exercises and other drills are important parts of the program. Training should incorporate “unplanned” scenarios, performance assessments and any analysis that will help the team perform during a real situation.
- Bugout – Training should include handling worst-case scenarios, or what the Military calls SHTF. When things really get sideways, you need to know what to do, when to do it and who’s going to do it. Crisis Management should integrate with your Incident Response Plan.
Regardless of what industry you’re in or what size your company is, at some point you will become a statistic (if you haven’t already). A Special Operations team specifically trained and ready to handle incidents could mean the difference between an achieved objective and a botched mission.
You may not be hunting terrorists or saving foreign diplomats, but your CEO might just give you a Medal of Honor.
It has become a common occurrence to hear about companies, governments and individuals being compromised by hackers.
Thanks to Anonymous, “the Chinese” and a bunch of kids from a country no one can pronounce, security has become a household word.
Seemingly overnight, information security has moved from a cottage industry to one that finds its sordid tales on the cover of every major periodical and leading every major newscast. It’s no secret that this condition exists because our adversaries have been and continue to be successful, to the tune of billions of dollars in intellectual property, bank accounts and defaced reputations.
Things have gotten sideways.
Many continue to ask why this situation persists, or from some perspectives, worsens. The answer is simply Newtonian: an object in motion will not change its velocity unless an unbalanced force acts upon it.
It’s time for an unbalanced force.
The US Military has developed tactics for when things get really sideways. For those life-or-death situations when you’re injured, surrounded by enemies and cut off from your support network. These tactics are called Escape and Evasion, and their applications aren’t limited to military survival.
As you read this, your critical assets sit unprotected. Not because you haven’t deployed firewalls, access controls and network segmentation, but because when those security controls are compromised (and they will be) those critical assets will be unable to protect themselves. They are inherently vulnerable, which is why they need compensating controls.
Enter Cyber Escape and Evasion.
For decades security professionals have been hardening perimeters, blacklisting bad actors and “locking things down”. These practices emerged when cyberwarfare was symmetric, when adversaries were [better] known and when cyberassets were few[er]. Sadly, these practices remain the foundation for many organizations, despite dramatic changes in attacks and attackers.
There are, however, some new concepts emerging regarding the protection of critical assets.
Imagine that your confidential data was camouflaged such that an unauthorized intruder couldn’t tell the data from the container. Imagine that your sensitive information assets were stored so randomly that hackers couldn’t make sense of them, even if they were discovered. Imagine that you deployed information decoys in such a way that it was difficult or massively time-consuming to tell which was the real source. Imagine that your sensitive data, once removed from its authorized container, could poison itself, much like the ink canister that is thrown into a bag of stolen cash.
What if the next time you were attacked, you could flood your attacker with false positives and false negatives, effectively disabling their ability to penetrate your network?
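One of those decoy ideas fits in a few lines of code. A “honeytoken” is a fake credential planted alongside real ones; since no legitimate user ever touches it, any use of it is a high-confidence alarm. The record format and field names below are hypothetical, chosen only to make the sketch self-contained.

```python
import secrets

def make_honeytokens(real_records, n_decoys=9):
    """Mix real records with plausible-looking decoys and return the mix
    plus the set of decoy keys, so the monitoring side can alert on them."""
    decoys = [{"user": f"svc-backup-{i:02d}", "api_key": secrets.token_hex(16)}
              for i in range(n_decoys)]
    mixed = list(real_records) + decoys
    secrets.SystemRandom().shuffle(mixed)  # no ordering clue for the intruder
    return mixed, {d["api_key"] for d in decoys}

def is_alarm(api_key, decoy_keys):
    """Any authentication attempt using a decoy key is an intrusion signal."""
    return api_key in decoy_keys
```

The economics are the point: the defender spends seconds creating decoys, while the attacker must spend time and risk detection sorting the real from the fake.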
These are just a few of the security tactics that are starting to get real attention. Each of these concepts moves security controls closer to the asset and emphasizes intelligence over building walls.
If you trust statistics, an intruder has already compromised the networks of 1 out of every 10 people reading this blog post. 6 more of those 10 will be hit sometime later this year. A recent study showed that most security professionals expected their security program to fail when it was truly tested.
I’ll save you the angst of asking the same question.
If there was ever a time to inventory your assets, pack a “go” bag and assess your capabilities, it’s now. Things have gotten sideways and your firewall can’t save you. Your critical assets are either going to keep calm, signal the rescue chopper and be exfiltrated by their Security Officer, or they’re going to apply a tourniquet and die quietly as they’re dragged off to a POW camp.
What are your orders, sir?
The US Government is getting ready to pass the Cybersecurity Act of 2012.
In this 205-page bill is legislation mandating that entities deemed “critical infrastructure” meet security standards set by the government, including the Department of Homeland Security. The proposed law “is the product of three years of hearings, consultations, and negotiations,” the intent of which is to secure systems which “if commandeered or destroyed by a cyber attack, could cause mass deaths, evacuations, disruptions to life-sustaining services, or catastrophic damage to the economy or national security.”
Like all other compliance mandates, it will fail.
Now let me first say that I am in no way anti-government (except in April), nor would I like our electrical grid, nuclear plants or water distribution facilities left exposed. However, government mandates are unlikely to solve the problem.
- Compliance Mandates are Latent – By definition, compliance regulations are developed and implemented after a threat has been identified. Add to this inherent issue the time it takes for a bureaucrat to understand and measure risk, hire analysts to author a bill and weave its perceived benefit into a re-election strategy, and we’ve left any potential legislation years behind its need. Compliance is not timely, nor can it be.
- Compliance Mandates are Optional – For compliance requirements to be truly successful, all entities subject to regulations would be complying in some way. Unfortunately this isn’t the case, nor is it realistic. Asking the Government to audit all organizations would require armies of people and even bigger piles of money. Some regulations have introduced self-assessments to ease this burden, which has only led to inconsistency in reporting and implementation. Ever heard of anyone going to jail for HIPAA violations? Compliance is not mandatory, nor can it be.
- Compliance Mandates are Vague – Anyone who has read the HIPAA Administrative Simplification or FFIEC Guidance knows that the Government is good at telling you what to do, but not how. And honestly, they really can’t be. How could such a broad technical standard be developed for so many different organizations? It might feel a little Draconian if the Feds told you exactly what directory services to use for authentication. Add to this challenge differing interpretations, language and changes in technology. Compliance is not prescriptive, nor can it be.
Despite its good intentions, compliance does not bring security. In fact, it may be having the exact opposite effect. In a recent survey, security administrators reported spending between 25 and 100 percent of their time on compliance efforts, all while data breaches were increasing at their organizations.
So what’s the answer?
Let’s trade compliance for security. Rather than penalizing those that aren’t in compliance, how about rewarding those that are secure? If we took the billions that the government spends every year on HIPAA, FISMA, SSAE16, FFIEC, SEC, FIPS, DHS, TSA and the thousands of other regulatory bodies, their audits, personnel and other perfunctory functions and instead spent that on real security education for the right people, we’d be far ahead of where we are today.
If they wanted to go the extra mile, Lieberman and Company could help organizations implement metrics to tell how well they were performing against their security programs. If they wanted to get real fancy the Government could subsidize real risk assessments for organizations in “critical infrastructure”. They’d probably still have money left over for tracking terrorist hashtags on social media.
For most of us, compliance is here to stay. The question is – just how far from real security will it diverge?
Just ask TJX, Heartland or Sony.
Yesterday I was waiting in the lobby of one of our larger clients, as I had arrived a bit early for a meeting. I was doing something really useful on my BlackBerry to kill time when a thirty-something woman walked in and approached the receptionist. To protect the not-so-innocent, we’ll refer to her as Jane.
What I’m about to tell you is a true story.
Jane: “Hi, I’m here to see [name deleted] but I think I may be in the wrong building.”
Receptionist: “OK, where do you think you’re supposed to be?”
Jane: “Hold on let me call my office and I’ll find out.”
Jane now steps away from the receptionist desk, pulls her mobile phone from her purse and immediately begins dialing her office for information. She reaches someone who appears to be her assistant, given the following conversation. We’ll make some assumptions about the Assistant’s dialogue.
Jane: “Hi [name deleted] can you do me a favor? I need you to access my calendar to see where my meeting is this morning, I think I’m in the wrong building.”
Assistant: “No problem Jane! How do I get access to your calendar?”
Jane: “My password is ‘Password1’ with a capital ‘P’. Yeah I know it sucks.”
Assistant: “OK well I can’t get to your calendar from my PC.”
Jane: “Yeah you can use my PC, I never lock it.”
Cue Quentin Tarantino soundtrack, an ultra-closeup of highly polished men’s dress shoes as they one-by-one, shuffle towards a thirty-something woman in a black suit, the staccato click of their heels shattering the deafening silence now engulfing the steel and glass lobby, cut to a super-tight shot in slow-motion of a GreyCastle Security business card being drawn from inside suit pocket –
“Hey Reg! Sorry I’m late.”
As I’m snapped from that dreamscape carved straight from a Hollywood set, I realize that we can’t save everyone, and not everyone wants to be saved.
I hope Jane made it to her meeting on time. I hope she changed her password when she got back to the office and has started locking her PC. And her phone. I hope the title on her business card doesn’t say Comptroller. I hope Jane doesn’t have to learn the hard way that just a little bit of security can go a long way.