Saturday, February 2, 2013

New York Times Hack and Symantec

If you're the kind of person who monitors news relating to security in technology, or you've been paying attention to headlines in the mainstream media, you may have seen the news stories detailing the infiltration of the New York Times' network by the Chinese.

The details are surprisingly thorough for a mainstream story, and the Times is being rather candid in their sharing of details. Usually when a business is "hacked" they'll do anything and everything possible to hide the details from the public so they can save face.

For people in the tech industry the story is still overly simplified and light on gritty details, but for a story aimed at public consumption it gets plenty gory. So I won't bother rehashing it; I even linked to a version of the story so you can view it there.

What I did find interesting, though, was the small storm that erupted because the anti-malware software the Times used was named directly in the article, and the publicity that generated, most of it negative. I have had dealings with Symantec, along with several other security/malware/antivirus solutions, so reading that 40-plus pieces of malware were created to infiltrate the Times in one way or another, and that their Symantec software caught approximately, oh, one of them, wasn't much of a surprise to me.

But apparently this is still news.

In terms of dealing with this, I found the fluffy public relations face rather amusing. The article recounting events mentioned Symantec in passing; it was not a direct attack on the company. But merely mentioning the name gave the public a face upon which to plant a black eye. It was nice of the Times to be candid, even if that candor accidentally made the company look rather incompetent.

Symantec wouldn't comment at first, and I thought their initial reaction on Twitter was rather...strange. Didn't they realize how they looked in the news story? A company using their Enterprise solution (I'm assuming, given the Times' size), with the not-so-cheap licensing associated with said product (no solution with the word "enterprise" in it is cheap), had over 40 malware applications get into its network, and the product caught one of them. And yet, Symantec said this:

There's some irony to the order of these tweets.
That tweet was rather...bland, don't you think? Perhaps the press release was more interesting. A fiery defense of the company? An acknowledgement of weak points in their software? From the article:

"Advanced attacks like the ones the New York Times described in the following article, (http://nyti.ms/TZtr5z), underscore how important it is for companies, countries and consumers to make sure they are using the full capability of security solutions. The advanced capabilities in our endpoint offerings, including our unique reputation-based technology and behavior-based blocking, specifically target sophisticated attacks. Turning on only the signature-based anti-virus components of endpoint solutions alone are not enough in a world that is changing daily from attacks and threats. We encourage customers to be very aggressive in deploying solutions that offer a combined approach to security. Anti-virus software alone is not enough."

...So the problem, according to Symantec, wasn't the software, which would have been effective; it was that the Times didn't use all of the software's features to detect the malware. In other words: our customer was too stupid to fully use our product.


It didn't take long for others to notice this response and criticize Symantec. Somehow Symantec was still trying to spin this in a positive light for themselves.



I'm not a marketing person, but I'm not sure implying "Our customers are morons" is a good public defense.
I can understand what Symantec is saying, even if I'm not sure I'd have framed the reply this way. Blaming the customer, even if they legitimately feel that way and there probably was more the customer could have done to protect themselves, is usually not going to make you look good. And the fact that the last tweet I quoted above links to their entire software suite conveys the message (to me) that if you don't want what happened to the Times to happen to you, you just need to buy more of their stuff. That doesn't seem effective.

On the other hand, protecting your network and your users is hard.

In the old days, viruses tended to be written by clever malcontents eager to show their technical prowess. Viruses were a way to display their programming ability while at the same time showing their hatred for non-technical people who dared to bring their non-geekhood to a domain ruled by geeks. The basic idea was that if you were stupid enough to get a virus, it was your own fault for not knowing how computers worked so you deserved what you got. Their software carried an implicit message with every infection:

Non-geeks are not welcome here.

But computers were becoming more mainstream and non-geeks weren't going away.

Somewhere along the way viruses went from being a nuisance to being something more sinister. Black hats learned that stupid people had money! The behavior of viruses evolved until they were no longer technically viruses but "malware"; they relied on social engineering and software flaws to spread rather than self-replicating code, and the target was less the computer and more the person using it. If you knew the computer was "infected," that was an accident, whereas in the golden age of viruses the programs often announced their presence with pride.

Much of the malware out there now is backed by organized crime and state-sponsored campaigns. These groups pay individuals or teams to orchestrate attacks that farm naive or ignorant users into running programs that target you with spammy and intrusive ads, redirect your web browsing to ad-ridden websites that may contain more malware, track your keystrokes to intercept passwords to banking websites...all sorts of fun things.

As you can probably guess, the antivirus industry is quite lucrative, and it has created a kind of arms race with malware authors. In the beginning the cycle of war was pretty simple: a virus author created a new virus and released it into the wild. Antivirus vendors got a sample, reverse engineered it, found a "signature" sequence of code in the executable that was unique to the virus, then updated their product for clients. The antivirus product then scanned every program you ran on your computer, and if anything matched that unique string of code, it flagged it as a virus and sometimes tried to clean your computer.
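That signature cycle is simple enough to sketch in a few lines. Everything here (the names, the byte patterns) is invented for illustration and bears no relation to any real AV engine's database:

```python
# Toy signature-based scanner: flag any data containing a known byte pattern.
# Both "signatures" below are made-up byte strings, not real malware markers.
SIGNATURES = {
    "HelloWorm":   b"HELLO_WORM_PAYLOAD",      # invented pattern
    "FakeDropper": b"\xde\xad\xbe\xef\x13\x37",  # invented pattern
}

def scan(data: bytes) -> list[str]:
    """Return the names of every known signature found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

sample = b"MZ...program bytes...HELLO_WORM_PAYLOAD...more bytes..."
print(scan(sample))  # -> ['HelloWorm']
```

Fast and precise for known samples, but it can only catch what a vendor has already seen and fingerprinted, which is exactly the weakness the next escalation exploited.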

One step forward for virus authors matched by one step forward by AV vendors.

Virus authors fancied themselves clever, so they needed to find clever ways to beat AV vendors.

That's when we started seeing viruses that incorporated encryption and altered themselves in memory so you couldn't find a single, simple signature. AV vendors had to react and find new techniques for deconstructing these polymorphic viruses.
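To see why polymorphism broke simple signature matching, here's a harmless toy sketch (the "payload" is just an ordinary byte string): the same payload is XOR-encoded with a fresh random key each "generation," so the bytes on disk differ nearly every time, even though decoding always recovers the identical payload:

```python
import os

# Toy polymorphism demo: identical behavior, different bytes each generation.
PAYLOAD = b"THE_SAME_BEHAVIOR_EVERY_TIME"  # stand-in for a malicious body

def new_generation(payload: bytes) -> bytes:
    key = os.urandom(1)[0] | 1            # random non-zero single-byte key
    body = bytes(b ^ key for b in payload)
    return bytes([key]) + body            # first byte stores the key

def decode(sample: bytes) -> bytes:
    key, body = sample[0], sample[1:]
    return bytes(b ^ key for b in body)

a, b = new_generation(PAYLOAD), new_generation(PAYLOAD)
print(decode(a) == PAYLOAD and decode(b) == PAYLOAD)  # True: same behavior
print(a == b)  # almost certainly False: no shared on-disk signature
```

A real polymorphic engine also mutates its decoder stub, but even this crude version means no fixed byte string survives from one copy to the next for a scanner to match.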

Second step from virus authors...second step from AV vendors.

The point: clever people with time on their hands are obsessed with the challenge of finding new and creative ways to be destructive and/or profit from people.

This little lockstep war continues today. It's reached a point where the possible attack surface (the set of places where unauthorized users can get in or unauthorized code can be run) of a potential target is huge, and as our society becomes more connected through the Internet, the surface continues to get worse (or better, depending on which side of the fence you're on). Computers, cellphones, our cars, printers, security cameras, televisions, disc and media players, even home appliances like refrigerators, air conditioners and thermostats are accessible over networks.

That baby monitor you installed to watch the crib from your computer? Did you forget to use a long, secure password? I bet the wireless connection was a lot more convenient than having to run a wire. But you did securely encrypt it, right? Since your wireless signal could be intercepted a house away...or from the street...or farther, if someone used a directional antenna?

It's really neat that you can connect your phone to your car. Handy, especially in states where it's illegal to use your phone without a hands-free connection, and $DEITY knows you HAVE to take that call from your boyfriend the moment he calls. But did you change the default pairing code that marries the Bluetooth in the car to the phone? Are you even able to change it? Because someone did write a program that lets clever techs use a laptop to connect to nearby Bluetooth systems. It's fun to stream porn audio into unsuspecting schlubs' cars on the freeway. Or listen in through the car audio system.

The point: there are ways for malware to get into your systems that you may not even be aware of.

Secondary point: the things that make our lives more convenient can be used against us.

The security industry now relies on a variety of techniques to try to close the holes in the potential attack surface. Vendors rely on signatures, heuristics, behavior analysis, and probabilistic analysis of email and web pages via proxy scans, along with good practices like firewalling connections and locking users down to accessing only the things they actually need on their computers (keeping users from installing updates to Word or new programs also means they can't accidentally install malware).
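The heuristic/behavior-analysis layer is, at its crudest, a weighted checklist: observed behaviors earn points, and a total over some threshold gets flagged. The rules, weights, and threshold below are all made up for this sketch, not drawn from any real product:

```python
# Toy behavior-based scoring. Every rule name and weight here is invented
# purely for illustration of the scoring idea.
RULES = [
    ("writes to system autostart location", 40),
    ("disables security updates",           35),
    ("contacts known-bad network address",  50),
    ("packed/encrypted executable body",    20),
]

THRESHOLD = 60  # arbitrary cutoff for this example

def risk_score(observed_behaviors: set[str]) -> int:
    """Sum the weights of every rule whose behavior was observed."""
    return sum(weight for name, weight in RULES if name in observed_behaviors)

sample = {"packed/encrypted executable body",
          "writes to system autostart location",
          "disables security updates"}
score = risk_score(sample)
print(score, "flag" if score >= THRESHOLD else "allow")  # 95 flag
```

The upside is that this can catch code no one has ever seen before; the downside, as the next paragraphs describe, is the cost in CPU, false positives, and user aggravation.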

Users, of course, tend to hate this because security measures come at a cost. Malware scanners use CPU and memory while they check every program being accessed, slowing down the computer. Proxies intercepting your web browsing and email to analyze the content for spam or embedded malware sometimes go wonky and end up messing up your email or creating web browsing quirks. Locking down the computer access privileges means you end up waiting hours or days for software updates or programs to be installed that would have taken a few minutes if you could do it on your own.

Users hate this. They just want to get their work done and just want their systems to work. This stuff gets in the way. And when security people do what they're supposed to do, they make the lives of their users more miserable; thus users begin to hate their system administrators even more. It's a cycle of antagonism.

Point: security is a balancing act. You can make a system really secure or really usable for its users, but rarely both.

Most of the malware out there is kind of generic. These crime syndicates trying to steal your money or browsing habits (or control of your computer) cast a wide net and are pretty content with the replies they get; this is why you normally get laughably horrible emails filled with generic messages offering you tons of cash in exchange for contact and banking information. Malware often comes in the form of code on hacked websites that waits for you to find the webpage and asks you to install a plugin that isn't really what it reports it is. The weak point is the social engineering of the user; we tend to be trusting of things we don't want to think about beyond the immediate future.

If I want to see boobies I need to install this plugin? Okay! <click>

<dialog box pops up> words...words...words...whatever. <click>

<email comes up asking you to run an attachment.> Blah blah. Okay, whatever. <click!>

People aren't just trusting; we'll do things that are blatantly dangerous or stupid if it means getting some kind of payoff. Even when a company does put generally good security policies in place, they fall down when users are willing to give away their passwords to anyone who says they're from IT and needs a password to test something.

In fact, a study found that users were willing to give up passwords for a chocolate bar (although, to be fair, there wasn't any indication whether those passwords were tested for validity). There are also cases where USB drives left in parking lots were picked up and plugged into systems with little thought of whether there was malware on them.

Point: Users are the weakest point of any security policy, and social engineering can be a powerful attack vector.

Unfortunately, with technology we still have to trust someone at some point. We end up needing to trust that someone more skilled or knowledgeable is doing the right thing for us, or acting in our interests, in areas where we lack skill or knowledge.

Of course in many, if not most cases, we abdicate responsibility for these domain-specific areas of knowledge; we don't want to deal with it. This is understandable when you look at the complexity of our society today, I suppose...

If you read this far...

...this is where things tie together a bit. See, I sort of understand the difficulty the Times IT crew faced, because the Times made themselves a target.

Usually malware is just sort of out there, like a poisonous jellyfish in the ocean waiting for prey to happen into it. But the Times was running a story on someone who was a big name in China, and China is known for sponsoring targeted "cyber-attacks" (to be fair, this has long been rumored of the US and its allies as well; I'm just focusing on China because it is alleged they were behind the New York Times attack).

When you get into becoming a named target, things get worse. Much worse. Because you are targeted for a custom attack. You're no longer a target of opportunity; you are a target that is researched, and a breach means tendrils of back doors being installed and user activity being actively monitored.

The network gets scanned and probed. Your employees are researched, and emails arrive addressed to specific employees with malicious code embedded (or, more likely, links to malicious code). Maybe they had a meeting with someone who was set up to hand over a drive carrying malicious code. Or maybe someone got a device sent to them for testing that contained trojan-horse code that went to work as soon as it was connected to the company network.

Once there is some kind of hook into a computer, software can be installed and run that scans the network from the inside. A military-sponsored attack means that when they find something connected with a vulnerability, custom code can be written to open a back door into that system; for example, installing malware on a particular brand of printer.

Yes, it's possible for a printer to have custom code embedded into it for attacks.

Emails get monitored, maybe forwarded or copied without your knowledge, leading to more information being leaked and another user that can be targeted with possibly better access privileges.

Malware monitoring that relies on signatures is useless when the software attacking you is custom-crafted. If there's a device running on your network that isn't monitored directly, the only way to detect it is intrusion detection at the border of your network, or devices watching for suspicious network behavior and alerting administrators. And if the attackers know what you're using for defense (they'd know you're running Symantec, for example, the moment they pull a list of running programs from an infiltrated system), they can create software specifically meant to bypass the malware scanners in use.

Worse, once a system is infected, it's nearly impossible to know with 100% certainty that you've completely eradicated the intruders. Clean a workstation with a complete reformat and reinstall only to discover that the intruders managed to reinfect it because you didn't realize that laser printer was also allowing remote access to your network...very frustrating, to say the least.

People tend to think that if they install antivirus software, they're safe. They're not. Security is a process with several layers, and there are many factors to consider in the great set of tradeoffs between security and usability. So the fact that Symantec detected one piece of malware out of the over 40 programs used to attack the New York Times isn't really surprising to me. Symantec's response, blaming the customer for not having more monitoring and alerting mechanisms in place, is valid in that those measures may have helped to some degree, but I doubt they would have stopped this attack.

On the other hand, a completely secure environment would likely have been a management headache as well as a miserable environment for users trying to actually get a product out the door. Sometimes I think software vendors in a certain industry develop a myopia to this aspect of their product in the real world.

In the end, Symantec took a bit of a black eye for being named. I have my gripes with their security products...several, several gripes...but part of the problem is simply the environment in which security software must co-exist and operate, and the blame can't be laid entirely at their feet.

Security is complicated. End users misunderstand it. And vendors, in their zeal to sell products, misrepresent the issues involved. If you're a company that may draw a giant target on your back, it's worth your trouble to hire people focused on computer and network security to work in your IT team, lest you, too, end up making the news for the wrong reasons...
