
    • I have always believed that in order for security to become an inherent part of software development it must come from within the development community itself.


      We can’t have security people who know development. We must have developers who know security. There is a fundamental difference and it is important.

Take Cenzic as an example. This is a firm founded by the same people who founded HBGary, yes, the same firm exposed as plotting a campaign to discredit WikiLeaks. Cenzic also holds a patent on web fuzzing. Now I am not a lawyer, but that patent appears as though it could be applied against OWASP projects like WebScarab at any time. This is the same firm that used to claim in its marketing that it scans for the OWASP Top Ten. That's right: using HTTP, they scanned for insecure crypto! This is my personal opinion, but this is not a firm with good ethics, yet it is actively involved in OWASP.


    • BSIMM and touchpoints do not go down and dirty to figure out how to actually make software secure.

      And frankly, that’s what the entire world really needs right now.

    • Injection attacks are among the most common and severe attacks a website can face, yet security professionals tend to prescribe input validation when output encoding is more appropriate. When passing data on to other destinations (the client, a database, other servers), transferring it via a safe method in a sanitized form is the optimal way to go. Input validation in this instance is sub-optimal.
      • AppSec Manifesto
         
        • Developers are not lazy, rather quality oriented ⇒ Insecure applications do not stem from laziness.
        • Features and functions are always more important than security ⇒ Security should enable more features and functions.
        • Responsible disclosure is a good way of achieving more secure applications ⇒ Hackers are needed.
        • Technology is a crucial part of security ⇒ Therefore I keep coding.
    • For any category of web application injection attacks, most folks in the field almost instinctively begin talking about "input validation". Sure, input validation is important when it comes to detecting certain attacks, but encoding of user-driven data (either before you present that data to another user, or before you use that data to access various services) is actually a great deal more important for truly stopping almost any class of web application injection attack.
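The point about encoding user-driven data before presenting it can be sketched in a few lines. This is an illustrative sketch using Python's standard-library `html.escape`, not any particular framework's API; the `render_comment` helper and payload are made up for the example:

```python
import html

def render_comment(user_input: str) -> str:
    """Encode user-driven data for the HTML context before display.

    Any markup the attacker supplied is rendered as inert text
    instead of being interpreted by the browser.
    """
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

payload = '<script>alert("xss")</script>'
safe = render_comment(payload)
# The metacharacters < > " are now entity-encoded, so the browser
# displays the payload instead of executing it.
print(safe)
```

The same principle applies per output context: HTML body, attribute, JavaScript, and URL contexts each need their own encoding rules.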
    • While input validation is still crucial for a defense-in-depth application security coding practice, it's truly the encoding of user data that becomes your final and most important line of defense against XSS, SQL injection (PreparedStatements and binding of variables actually encode user-driven data specific to each database vendor via the Java JDBC driver), LDAP injection, or any other class of injection attack.
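The quote describes Java JDBC PreparedStatements; the binding principle is identical in any language. A minimal sketch with Python's built-in sqlite3 module (table, user, and secret are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Classic injection attempt: an always-true predicate smuggled in as data.
attacker_input = "x' OR '1'='1"

# Unsafe: string concatenation lets the quote act as a SQL metacharacter.
unsafe_sql = "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
leaked = conn.execute(unsafe_sql).fetchall()  # leaks alice's secret

# Safe: the ? placeholder binds the input as a literal value, so the
# driver handles encoding and the quote loses its SQL meaning.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()  # no user has that literal name, so no rows

print(leaked, safe)
```

The bound query treats the whole attacker string as one value, which is exactly the "encoding specific to each database vendor" the excerpt describes.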


    • Many web sites have SQL-injection and XSS (Cross-Site Scripting) vulnerabilities, and security articles often mention lack of input validation as the reason for these problems. This isn't necessarily correct.
    • The metacharacter problem
      Both SQL-injection and XSS are metacharacter problems. A metacharacter is a control character used in a part of the system to control the display or flow of data. These problems occur every time a system communicates with a system of a different flavour, be it a browser, a database or a legacy system.


    • Advocating input validation as the first line of defense against injection vulnerabilities is like Microsoft forbidding users from typing “for”, “if”, or “while” in Office Word for fear that a victim could execute arbitrary code by feeding a Word document to a compiler. It’s like preventing race conditions by carefully avoiding spawning any threads, or preventing buffer overflows by ensuring that all buffers are “kept small”.
    • Injection vulnerabilities – i.e. those nifty vulnerabilities like cross-site scripting (XSS), SQL injection, XPath injection, OS command injection, and so on – are caused by one simple thing: the dumbness of a developer not seeing that he’s using one channel (i.e. an HTML page, or a SQL query) to convey two types of content (i.e. rendering directives and text content, or SQL statements and literal parameters).


    • We belong at level six, and unless we appreciate and understand how security fits in with functions, performance, usability, uptime, and maintainability, we will keep being ignored by developers.
    • a featureless system is useless. A security feature that hits performance notably is out. A system with poor usability will bring no business so usability is above security. "Uptime, hey that's a security thing!" No. Just because DoS attacks hit your uptime doesn't mean we own the issue. Many things affect uptime such as release and deploy cycles, maintainability, caching, scalability, configuration, and patching (no, not just security patching). Finally, maintainability affects ROI much more than security in the general case. Thus, security == level six.


      • McGraw's taxonomy has the following 7+1 classification:
        • Input Validation and Representation
        • API Abuse
        • Security Features
        • Time and State
        • Errors
        • Code Quality
        • Encapsulation
        • Environment
      • The purpose is to build a root-cause taxonomy: for example, the root cause of a security flaw according to CLASP can be classified, depending on the point of introduction into the SDLC, in a hierarchical view.
         
        • Level 1: Identify Range and Type of Errors. At Level 1 you might have buffer overflows (introduced during requirements, design, and implementation), command injection (design and implementation), and double free (implementation).
        • Sub-level 2: Identify Environment Problems. Here you might have resource exhaustion (design and implementation).
        • Sub-level 3: Identify Synchronization and Timing Errors. TOCTOU, race conditions (design and implementation).
        • Sub-level 4: Identify Protocol Errors. Misuse of cryptography (design).
        • Sub-level 5: Identify Generic Logic Errors. Performing a chroot without a chdir (implementation).

        The approach is effective at determining the root causes of security flaws, with emphasis on when in the SDLC each flaw might originate, helping architects and developers build security into the SDLC.
    • Threat modeling is a process for modeling security threats and identifying design flaws that can be exploited by those threats, so that systems can be securely designed and countermeasures implemented to mitigate the threats.
    • Requirements: Key stakeholders use white-boarding and collect information through worksheets to identify end-to-end deployment scenarios. Drawing abuse cases allows the identification of negative scenarios and a preliminary threat analysis during requirements definition. Data classification can also be performed during requirements to drive the identification of potential threats to the data assets.


    • Firewalls need to be on every ingress/egress point in the organization or they don’t solve the problem. Firewall technology has to scale to be manageable over every connection and work on every size pipe. Network vulnerability scanners have to scale to scan every system in the enterprise. Patch management solutions need to scale to manage every system with any OS. Likewise the only way to solve application security is to scale it to every release of every app.
    • we don’t just focus on the accuracy of our application security solution. We also focus on our solution working well at large enterprise scale. Our mission is to make it possible for an organization, no matter how large, to perform security testing on all apps: every release, from every source (in house, outsourced, vendor, open source), and on every platform.


    • One of the concepts that I think gets heavily overlooked in security is the idea of assurance: the degree of protection a specific control provides. When speaking about assurance, the discussion is how resilient a control is to attack; controls easily thwarted offer low assurance, while controls very difficult to bypass offer high assurance.
    • Right now AppSec is very tactical. We are trying to fix the immediate problems through developer education and add on security frameworks, and as a stop gap that’s fine.


    • 1) Draw a data flow diagram (DFD) of how data moves around for your application/feature (depending on the scope of the model), and where trust boundaries are crossed

       

      2) Enumerate threats. The paper has a nice little matrix of what threats are pertinent to what elements of the DFD and while he applies the necessary conditionals about it being specific to MS and perhaps not being universally applicable, I think it is a pretty good reference in general.

       

      3) Determine Mitigations (or decide to let a threat slide)

       

      4) Validate mitigations that are put in place

       

      We lived by those simple steps, rather than getting into the horrid threat trees and DREAD risk modeling and all of that other overhead. I say this having been the security guy for my area in MS, supposedly the expert, and I ignored them. I can’t imagine the average non-security guy going to that trouble. Granted, being a security guy I probably find it a bit easier to subjectively (wait, am I supposed to say qualitatively now that I am a CISSP?) determine risk from a threat than put a number on it, which is what the overhead in threat modeling was supposed to do, but I think most non-experts can still make an educated guess.
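The four steps above can be sketched as a lightweight data structure. The STRIDE-per-element pairing below is a simplified illustration of the kind of threat/element matrix the paper describes, not Microsoft's actual table, and the DFD elements are invented for the example:

```python
# Step 1: DFD elements and whether they sit on a trust boundary.
dfd = [
    {"name": "browser", "kind": "external_entity", "crosses_boundary": True},
    {"name": "web_app", "kind": "process",         "crosses_boundary": True},
    {"name": "user_db", "kind": "data_store",      "crosses_boundary": False},
]

# Step 2: which STRIDE threats apply to which element kinds (simplified).
stride_matrix = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process":         ["Spoofing", "Tampering", "Repudiation",
                        "Information disclosure", "Denial of service",
                        "Elevation of privilege"],
    "data_store":      ["Tampering", "Information disclosure",
                        "Denial of service"],
}

threats = [(e["name"], t) for e in dfd for t in stride_matrix[e["kind"]]]

# Steps 3 and 4: record a mitigation (or an accepted risk) per threat,
# then "validate" by checking nothing was left undecided.
mitigations = {th: "TODO" for th in threats}
mitigations[("user_db", "Information disclosure")] = "encrypt at rest"
mitigations[("web_app", "Spoofing")] = "accepted risk"

unresolved = [th for th, m in mitigations.items() if m == "TODO"]
print(f"{len(threats)} threats enumerated, {len(unresolved)} unresolved")
```

Even this toy version captures the point of the excerpt: the value is in enumerating and dispositioning threats, not in elaborate scoring.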

    • below breaks down my thoughts on the type of security testing that can be integrated into each role (developer, QA, Security)
      • Development Team:
        • Use an automated static code analysis tool (for example: Ounce, Fortify, or Veracode)
        • Perform peer code reviews
        • Construct and run unit/functional/automated tests for previously identified security issues, using libraries like Watir/WatiN/Watij/Capybara/etc.


    • threat modeling is a secure development activity in which developers, architects, designers, security personnel, and sometimes managers consider possible attack scenarios or threats against an application. This process typically occurs during a planning or design phase of development before any code is written.
      • 1. Resource <- 2. Capability <- 3. Use Case <- User
         1. Resource <- 2. Capability <- 4. Threats <- Attacker

         
        1. Identify resources within the application, such as database tables, file systems, and application servers.
        2. Determine the capabilities or actions that can be performed on each item. One example for a database table may be to read checking account transaction data.
        3. Consider the proper way in which a valid user may invoke a capability. Following the previous example, a customer may log in to Online Banking and access their checking account details.
        4. Consider the threats to a particular resource based on the associated capabilities. Threat modeling participants can base threats on defined use cases or by brainstorming in a free-form manner. Example: An attacker may wish to access another user's checking account data.
        5. Assign a risk level to each threat, such as high, medium, or low.
        6. Define countermeasures based on the analysis of the risk level vs. the cost of implementing the solution.
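The Resource ← Capability ← Threat chain above reads naturally as a small data model. A minimal sketch, using the article's checking-account running example; the class and field names are mine, not from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    description: str
    risk: str = "low"            # step 5: high / medium / low
    countermeasure: str = ""     # step 6

@dataclass
class Capability:
    action: str                  # step 2
    valid_use: str               # step 3
    threats: list = field(default_factory=list)  # step 4

@dataclass
class Resource:                  # step 1
    name: str
    capabilities: list = field(default_factory=list)

# The article's running example, expressed in the model above.
txn_table = Resource("checking_account_transactions")
read = Capability(
    action="read transaction data",
    valid_use="customer logs in to Online Banking and views their account",
)
read.threats.append(Threat(
    description="attacker reads another user's checking account data",
    risk="high",
    countermeasure="enforce per-user authorization on every read",
))
txn_table.capabilities.append(read)

print(txn_table.capabilities[0].threats[0].risk)
```

Walking the model resource-by-resource is what makes the brainstorming in step 4 systematic rather than ad hoc.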


    • A secure software development process cannot be built overnight. Organizations gradually adopt security activities based on factors like culture, customer demand, regulations, budget, and security incidents. Each organization adds security practices at different rates
    • the six stages do not constitute a prescriptive secure software development roadmap; instead, the model simply describes a common progression observed in organizations


    • Taking the Time to Analyze Root Causes and Develop Standards
       
       Now that the fire is out (the issues are fixed), let's take some time to understand how the vulnerabilities were created in the first place. Was it a result of missing output encoding practices, inconsistent page-level access controls, or some other issue? Gather a list of root causes that resulted in the identified weakness.
    • 1) Be careful when using cross-domain messaging features
      HTML5 APIs allow processing of messages from an origin different from that of the application receiving the message. You should check the origin of the message to validate that it can be trusted, such as by whitelisting domains (accept only requests from trusted domains and reject the others). Specifically, when using HTML 5.0 APIs such as postMessage(), check the MessageEvent origin attribute before accepting the request.
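In a browser this check runs in JavaScript inside the "message" event handler against event.origin, but the whitelisting logic itself is language-agnostic. A minimal sketch (the trusted origins are hypothetical):

```python
# Accept a cross-document message only if its origin is on an explicit
# whitelist. Exact-match comparison matters: substring or prefix checks
# can be bypassed with lookalike hosts such as app.example.com.evil.net.

TRUSTED_ORIGINS = {"https://app.example.com", "https://partner.example.com"}

def accept_message(origin: str) -> bool:
    """Return True only for an exact match against the whitelist."""
    return origin in TRUSTED_ORIGINS

print(accept_message("https://app.example.com"))           # trusted
print(accept_message("https://app.example.com.evil.net"))  # rejected
print(accept_message("http://app.example.com"))            # wrong scheme, rejected
```

Note that the scheme is part of the origin, so an http:// variant of a trusted host is still rejected.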
    • 2) Always validate input and filter malicious input before using HTML 5.0 APIs.
      You should validate input data before you process any messages from HTML 5.0 APIs such as the postMessage() API. Input validation should be done at a minimum on the server side, since client-side validation can be bypassed with a web proxy. If you are using client-side SQL such as WebSQL (as Google Gears did, for example), you should filter data for SQL injection attack vectors and use prepared SQL statements.


      • Validate input. If you allow a user to post anything to your website, make sure that you only accept the input you want. Does the field ask for the person’s name? Then only text should be allowed. Need an email address? Make sure the @ symbol is present. In both cases, any code should be filtered out.
      • Escape untrusted data. Most websites don’t require data, however for those that do, escaping data the right way will still allow it to be rendered in the browser properly. Escaping just lets the interpreter know that the data is not intended to be executed. When the data does not execute, the attack doesn’t work.
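The two checks the first bullet asks for can be sketched as simple whitelist validators. These rules are deliberately only as strict as the text above suggests (name is text only, email must contain an @); real validators would be stricter:

```python
import re

def valid_name(value: str) -> bool:
    # "Only text should be allowed": letters, spaces, hyphens, apostrophes.
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z '\-]*", value))

def valid_email(value: str) -> bool:
    # The minimal check the text suggests: an @ with something on each side.
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+", value))

print(valid_name("Mary O'Brien"))               # accepted
print(valid_name("<script>alert(1)</script>"))  # rejected: code filtered out
print(valid_email("user@example.com"))          # accepted
print(valid_email("not-an-address"))            # rejected
```

Whitelisting what a field may contain, rather than blacklisting known-bad strings, is what makes this kind of validation robust.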
    • CSRF vulnerabilities occur when a website allows an authenticated user to perform a sensitive action but does not verify that the user herself is invoking that action. The key to understanding CSRF attacks is to recognize that websites typically don't verify that a request came from an authorized user. Instead they verify only that the request came from the browser of an authorized user. Because browsers run code sent by multiple sites, there is a danger that one site will (unbeknownst to the user) send a request to a second site, and the second site will mistakenly think that the user authorized the request.
      • That's the key element to understanding XSRF. Attackers are gambling that users have a validated login cookie for your website already stored in their browser. All they need to do is get that browser to make a request to your website on their behalf. If they can either:
        1. Convince your users to click on an HTML page they've constructed
        2. Insert arbitrary HTML in a target website that your users visit

