
christopher bischoff's List: InfoSec Perspective

    • "Active Defense" has many definitions and should not be strictly equated with hack back.  Hack back, instead, may be considered a subset of "Active Defense," which does include cyber self-defense or cyber self-help.  Whether or not a company can utilize these theories depends entirely on the given facts of a situation.
    • My draft definition of "Active Defense" (still a work in progress) is as follows: "a meticulous and escalated approach to a persistent cyber attack wherein the company leadership makes a decision whether or not to progress at pre-determined decision-points, evaluating risk, liability and legal issues."  Each decision-point will include all of the intelligence gathered, all potential options, tools, techniques, possible scenarios, potential risks, liability, and legal issues.  Depending on the facts and the confidence of the decision-maker, there can be few decision-points or many.  The number of decision-points is also a factor to consider in the scenario, and the actual amount of liability, if any, may depend on how meticulously and cautiously the decision-maker acted.  For example, the first decision-point may be whether the attack(s) is or are persistent.  "Active Defense" is very fact-dependent.
    • The premise is clear: the information security community has the know-how and capability to prevent many of the simple attacks that result in breaches. Consistent, disciplined execution of these preventative measures is where we struggle.
    • characterize the threats we are dealing with: 

       >> Rogue employees: trusted individuals who exceed their authority for personal gain or to deliberately damage the organization. 

       >> Accidental disclosures: trusted individuals who accidentally damage the organization through inadvertent misuse of data.  

       >> Risky business process: a potential leak due to a business process that is either poorly secured or against policy (but for legitimate business reasons). 

       >> External attacker on the inside: an external attacker who has penetrated the organization and is operating internally. This threat actor might have compromised a trusted account and therefore appear to be an internal user.

    • For example, when dealing with potential rogue employees, you tend to rely more on monitoring technologies over time. You don't want to interfere with legitimate business activities, so you use policies in tools like DLP to track mishandling of sensitive information. How the employee extracts the data also tends to follow a pattern: they aren't necessarily technically proficient and will often rely on USB storage, CD/DVD, or private email to extract data, and they usually know the data they want before the attack.
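      As a rough illustration of the DLP-style policy described above, here is a minimal sketch that flags sensitive content copied to removable media or mailed to a personal email domain. The event format, marker tags, and domain list are assumptions made for the example, not any particular product's API.

```python
# Illustrative sketch only: a toy DLP-style rule for the exfiltration patterns
# mentioned above (USB storage, CD/DVD, private email). Event fields and marker
# lists are assumptions for the example, not a real product's interface.

SENSITIVE_MARKERS = {"CONFIDENTIAL", "SSN", "CUSTOMER_LIST"}   # hypothetical content tags
PERSONAL_EMAIL_DOMAINS = {"gmail.com", "hotmail.com", "yahoo.com"}
REMOVABLE_MEDIA = {"usb", "cd", "dvd"}

def evaluate_event(event: dict) -> list[str]:
    """Return policy violations for a single file-transfer event."""
    findings = []
    tags = set(event.get("content_tags", []))
    if not tags & SENSITIVE_MARKERS:
        return findings   # nothing sensitive involved, no alert

    channel = event.get("channel", "")
    if channel in REMOVABLE_MEDIA:
        findings.append(f"sensitive data written to {channel} by {event['user']}")
    if channel == "email":
        domain = event.get("recipient", "").rsplit("@", 1)[-1].lower()
        if domain in PERSONAL_EMAIL_DOMAINS:
            findings.append(f"sensitive data mailed to personal domain {domain} by {event['user']}")
    return findings

# Example event, as a monitoring agent might report it (format is assumed):
print(evaluate_event({
    "user": "jdoe",
    "channel": "usb",
    "content_tags": ["CONFIDENTIAL"],
}))
```

      In practice such rules live inside the DLP product itself; the point is only that detecting careless or rogue insiders is largely pattern-matching on a handful of well-known channels.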

    2 more annotations...

    • 10. Privacy (here) Big Brother is watching

       

      There is little doubt that advances in technology have radically changed many aspects of our lives. From healthcare to manufacturing, from supply chains to battlefields, we are experiencing an unprecedented technical revolution.

       

      Unfortunately, technology enables the average person to leak personal information at a velocity that few understand. Take a moment and think about how much of your life intersects with technology that can be used to track your movements, record your buying patterns, log your internet usage, identify your friends, associates, place of employment, what you had for dinner, where you ate and who you were with. It may not even be you who is disclosing this information.

       

      We live in a world without secrets and we must act accordingly. Realize that much of what you may think is confidential isn’t. To borrow an old saying: if more than one person knows something, it isn’t a secret; and if you’re alive today, you have very little privacy.

    • 9. Advanced Persistent Threats (here) Alarming people thoroughly

       

      Advanced persistent threats are real. As hackers moved from hobby-based malware and cyber-vandalism to financially motivated or state-sponsored hacking, we experienced more thoughtful and controlled approaches. APT isn’t a new class of threat that requires a whole new, disparate set of technologies to address. In fact, many of the technologies you have been using to identify and monitor deviations from normal operating state are suited to provide a base level of visibility into the environment (a minimal sketch of that baseline idea follows this annotation).

       

      Remember, 90 percent of all external attacks take advantage of poorly administered, misconfigured, or inadequately managed systems that any moderately competent hacker can exploit. Sure, there are some real artists out there, but when you can take candy from a baby 90 percent of the time, you rarely need expert safecrackers.
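      The "deviations from normal operating state" point above can be made concrete with a very small baseline check: compare today's value of a monitored metric (failed logins, outbound connections per host, and so on) against its recent average and flag large departures. The threshold and the sample numbers below are assumptions for the sketch, not recommended values.

```python
# Illustrative sketch of baseline-deviation monitoring: flag a daily metric
# (e.g., failed logins per host) that strays far from its recent average.
# Thresholds and sample data are assumptions, not operational guidance.
from statistics import mean, stdev

def is_anomalous(history: list[float], today: float, k: float = 3.0) -> bool:
    """True if today's value deviates more than k standard deviations from the baseline."""
    if len(history) < 2:
        return False                 # not enough data to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) > k * sigma

# Hypothetical failed-login counts for the past week, then today's value:
baseline = [12, 9, 15, 11, 14, 10, 13]
print(is_anomalous(baseline, 120))   # True: worth a closer look
print(is_anomalous(baseline, 14))    # False: within normal variation
```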

    5 more annotations...

    • If we want to really improve security and produce sensible results, it’s time for us to wake up to reality and deal with security without unrealistic expectations.
    • IT changes are almost never implemented as “Big Bang” projects. There is always a phased approach. Pareto is always being applied: 80% of the bad stuff is removed fairly soon and the rest stays around for a long time. An isolated situation like that wouldn’t be an issue, but in medium and large organizations we can see dozens of cases of older, unsupported, often insecure technology, configurations, and processes just refusing to go away. That’s the nature of things and I can’t see that changing soon.

    2 more annotations...

    • came to the conclusion that the security community may have reached an awkward age at which we're grown up enough to be focusing on the golly-gee/whiz-bang/cool stuff (vis-à-vis the "APTification" of all that passes for security discussion) and, as a result, we're neglecting the basic, "Security 101" stuff that raised the bar in the first place.
    • Over the past year, how many high-profile hacks have been the result of awesome cutting edge skillz?  How many have happened because someone just flat-out did something dumb?  Take a quick gander at back issues of SANS NewsBites and I think you'll be convinced as well: We truly are neglecting the basics.
    • I’ve seen the attitude promulgated that if you’re smart and have skillz, it’s okay to be an asshole.  That it’s somehow okay to hurl insults under the guise of “educating” someone and that they should be grateful for it.  That caring about something gives you permission to display your bad temper for all to see, because you’ll make up for it by doing something really cool.

        

      As far as I’m concerned, nothing could be further from the truth.  There are plenty of egotists in the industry who think they’re entitled to a free pass on manners, and when I’m hiring, I steer clear of them, because there are just as many genius-level hackers who can also manage to behave themselves and work cooperatively with others without starting brawls.

    • There is absolutely no need to sully enlightenment, integrity, openness and honesty by adding rage (and let’s call it what it really is:  a temper tantrum).  Every honorable goal that security professionals have – be it research, defense, development or education – can be achieved without stomping on fellow humans in the process.  Age does not confer the right to bully others under the guise of “educating” them; nor does any level of experience or knowledge.  No matter how much you’ve contributed to the state of security (or think you’ve contributed – watch that ego again), you still don’t get a pass on any bad behavior, and your lack of social skills is not a badge of honor.  Every industry has its members whose actions make the rest look bad, but at least we shouldn’t be glorifying them.  We have better options right in front of us.

    1 more annotation...

    • The roots of calling people “users” are likely harmless and simple: when computers were new, expensive and in limited supply, only a handful of people actually used the system. As a result, it probably made sense to consider those folks as computer users… eventually shortened to “users.”

       

      Today the situation is different.

       

      Somehow this notion of “users are losers” (sometimes written as lusers) transcended drugs and became part of technology. When technology and security practitioners refer to people as users

    • The word “user” is a label that instantly strips a person of their identity and objectifies them in a way that creates distance and ultimately prevents us from serving their needs.

       

      Distancing ourselves through language and labels is an unintended protection mechanism (I wrote about this in a 2007 column claiming “It’s time to reboot the security industry”) that reinforces our knowledge, experience and power while shielding us from the knowledge, power and experience of the individuals we work with.

    5 more annotations...

    • look at addressing the challenges of protecting information
    • “People have been unintentionally and systematically disconnected from the consequences of their actions for so long, they are no longer held accountable or take responsibility,” explains Michael. “The real key to protecting information is to engage people in the process and support them with the right tools and technology.”

    1 more annotation...

    • we have been presented with a series of reports, complete with statistics, suggesting that the cause of security breaches is people. Whether it is external attackers taking advantage of individuals, insider mistakes or even insider espionage, the overly simple and false conclusion seems to be that people are the problem.
    • “breach” (no matter how it is defined) is a symptom. So focusing on preventing security breaches basically creates a losing situation where valuable time, money and other resources are wasted… only to leave the real challenge untouched.

    4 more annotations...

      • There are a few principles I like to keep in mind when discussing the insider threat. Some are a little redundant to make a point from a slightly different perspective:

          
           
        1. Once an external attacker penetrates perimeter security and/or compromises a trusted user account, they become the insider threat.
        2. Thus, from a security controls perspective it often makes little sense to distinguish between the insider threat and external attackers: there are those with access to your network, and those without. Some are authorized, some aren’t.
        3. The best defenses against malicious employees are often business process controls, not security technologies.
        4. The technology cost to reduce the risks of the insider threat to levels comparable to the external threat is materially greater without business process controls.
        5. The number of potential external attackers is the population of the Earth with access to a computer. The number of potential malicious employees is no greater than the total number of employees.
        6. If you allow contractors and partners the same access to your network and resources as your employees, but fail to apply security controls to their systems, you must assume they are compromised.
        7. Detective controls with real-time alerting and an efficient incident response process are usually more effective for protecting internal systems than preventative technology controls, which more materially increase the overall business cost by interfering with business processes.
        8. Preventative controls built into the business process are more efficient than external technological preventative controls.
          

        Thus, the best strategy includes a mix of technology and business controls, a focus on preventing and detecting external attacks, and reliance on a mix of preventative controls and detective controls with efficient response for the insider threat. I really don’t care if an attacker is internal or external once they get onto a single trusted system or portion of my network.

          

        The “insider threat” isn’t a threat. It’s become a blanket term for FUD. Understand the differences between malicious employees, careless employees, external attackers with access inside the perimeter, and trusted partners without effective controls on their systems and activities.
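        A minimal sketch of the detective-control idea in the principles above: every access to a sensitive resource is checked against an authorization table and alerted on immediately, without caring whether the account belongs to an employee, a contractor, or an external attacker who has compromised it. The resource names, authorization table, and alert mechanism are assumptions for illustration.

```python
# Illustrative detective control: check each access to a sensitive resource against
# an authorization table and alert immediately on anything unexpected. The same check
# applies whether the account is an employee, a contractor, or a compromised "trusted"
# account. Names and the alert mechanism are assumptions for the sketch.

AUTHORIZED = {                       # hypothetical resource -> allowed accounts
    "payroll_db": {"hr_app", "payroll_admin"},
    "source_repo": {"build_bot", "dev_team"},
}

def alert(message: str) -> None:
    # Stand-in for a real alerting pipeline (SIEM, pager, ticketing system).
    print(f"ALERT: {message}")

def check_access(account: str, resource: str) -> None:
    allowed = AUTHORIZED.get(resource)
    if allowed is None:
        alert(f"{account} touched unknown resource {resource}")
    elif account not in allowed:
        alert(f"{account} accessed {resource} without authorization")

# Example access events as a log pipeline might deliver them (format assumed):
check_access("payroll_admin", "payroll_db")   # silent: authorized
check_access("contractor_x", "source_repo")   # alert: not in the allowed set
```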

    • we expect to see more awareness surrounding security incidents of an “insider job” nature. Attention will grow as a result of an increase in the number of incident reports that show data theft and security breaches tied to employees and other insiders.
    • UK ICO regulation, in force from April 2010, encouraged firms to engage affected individuals or face a heavy fine (up to 500,000 GBP!). These laws and regulations, while generally discussing the need for security controls, place most of the emphasis on breach notification. In Germany, strict fines are imposed on companies that do not adhere to the privacy laws, which require the publication of all data breaches affecting individuals. A surge in notifications resulting from employees accessing data in ways that violate business policy is bound to occur.

    2 more annotations...

    • Have you noticed that we’re spending more and more but we’re not getting any more secure? Vendors are selling us the latest solution to fix the latest ill…  DLP, PKI, IDS, IPS, Endpoint protection, UTM – an alphabet soup of solutions to address an alphabet soup of regulations, litigation and compliance pressures… PCI, HIPAA, DPA, GLBA, etc. But the threats are changing and the impact is becoming more significant.
    • The bad guys are bypassing our firewalls. Anti-virus is deployed but malware flourishes. Identity management expands in complexity and capability and yet unauthorized (and even authorized) abuses continue.

    6 more annotations...

    • At #OWASPSummit: "Developers don't know bleep about security". Well, I got news. You don't know bleep about development.
    • Bottom line: infosec has been delivering the same architecture for 15 years, and is in no position whatsoever to throw stones at developers.

    1 more annotation...

    • The best SSG members are software security people; but software security people are often impossible to find. If you must create software security types from scratch, start with developers and teach them about security. Do not attempt to start with network security people and teach them about software, compilers, SDLCs, bug tracking and everything else in the software universe. No amount of traditional security knowledge can overcome software cluelessness.
    • I call it the He Got Game rule. When Spike Lee was making the movie He Got Game, about a burgeoning NBA basketball star, he had a choice. He could cast a Hollywood actor like Robin Williams and try to teach him to play realistic-looking basketball, or he could cast a basketball talent like future NBA All-Star Ray Allen and teach him how to act. Spike Lee chose Ray Allen because he quickly came to the conclusion that it was much easier to teach a basketball player how to act than to teach an actor to play basketball. It’s the same in security: it’s much simpler to teach motivated developers how to threat model, how the WS-Trust protocol works, and so on, than it is to teach even motivated, operationally focused security people how WebSphere, Apache, directories and other core software technologies are built.

    1 more annotation...
