Guideline for handling False Positives / False Negatives #75
Comments
Hello @maxmeffert, and thank you for deciding to contribute! As you already noted, Electronegativity was born as a simple, "naive" SAST tool. As with every SAST tool, its role should be to support a manual auditor reviewing the source code of a piece of software to identify sources of vulnerabilities. To help less experienced testers, the team decided to create a documentation reference for every issue featured in Electronegativity, where the auditor can read the associated risks, reproduction/exploitation steps, and reference links for any reported issue.

Nonetheless, I understand your struggle. As you pointed out, by design we're being very cautious when deciding NOT to issue a finding, preferring to lower its confidence or even flag it for manual review instead of discarding it altogether. Because of this, I believe we inherently have a much higher rate of false positives than false negatives in our checks, but that's a trade-off we accepted and are aware of, given that the tool was meant to be used mainly as a support to a manual auditor.

With the desire to improve Electronegativity's accuracy, in recent years we recognized that we needed some sort of context-awareness or "data flow analysis", as you well put it. This is why we encapsulated Escope (#40) to introduce a variable scoping/resolution mechanism into some notable checks (mainly the BrowserView/BrowserWindow options ones). This works most of the time in simple cases, but it still fails when the variables' values can't be determined statically (because they are assigned by non-pure functions or environment variables, or are simply declared without a detected assignment).

In bcae6bb I introduced a small change so that the following case is now resolved:

```javascript
const configuration = {
  webPreferences: {
    contextIsolation: true,
  },
};
const window = new BrowserWindow(configuration);
```

However, we don't currently support recursive or convoluted resolution such as this (as per your initial submission):

```javascript
var safe_settings = {
  browserWindows: [
    {
      contextIsolation: true
    }
  ]
};
const configuration = {
  webPreferences: {
    contextIsolation: safe_settings.browserWindows[0].contextIsolation,
  },
};
const window = new BrowserWindow(configuration);
```

While the changes introduced help cover (b), a SAST tool will never effectively solve your initial problem. I'm positive about adding a disclaimer detailing how we generally decide what to report, though (i.e. our better-safe-than-sorry approach).
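For illustration, here is a minimal sketch of the kind of value that stays statically undeterminable. The helper function and the `CTX_ISOLATION` environment variable name are hypothetical, invented for this example; they are not part of Electronegativity or Electron:

```javascript
// Sketch: a webPreferences value derived from the environment.
// Static analysis cannot know what CTX_ISOLATION (hypothetical name)
// will hold at runtime, so a check can at best flag this for manual review.
function buildWebPreferences(env) {
  return {
    // contextIsolation is true only when the environment opts in
    contextIsolation: env.CTX_ISOLATION === 'true',
  };
}

// At runtime this would feed into something like:
//   new BrowserWindow({ webPreferences: buildWebPreferences(process.env) })
```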
This really varies check by check. Parts of the checks' behavior are described by the tests, but their logic can vary greatly depending on the Electron version, other checks' values, or check-specific considerations. While having such specs would be desirable in the long run, I can't see myself committing to this kind of effort in the near future. Having said that, the tool is always open to contributions from the community! :P

Another solution for your use case would be creating some kind of exclusion list for false-positive issues (not currently supported, but it should be easy enough to implement). That way, if you were to test the same codebase in a development/build pipeline, you would be warned about new problems every time new dangerous code is pushed.

Let me know if this answers your questions!
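A minimal sketch of what such an exclusion list could look like in a pipeline script. The finding shape used here (check id + file + line) is an assumption for illustration, not Electronegativity's actual output format:

```javascript
// Suppress known false positives from a baseline file and fail the
// build only on new findings. The finding object shape is hypothetical.
function filterFindings(findings, exclusions) {
  const key = (f) => `${f.check}:${f.file}:${f.line}`;
  const excluded = new Set(exclusions.map(key));
  return findings.filter((f) => !excluded.has(key(f)));
}

// Baseline of already-triaged findings, checked into the repository
const baseline = [
  { check: 'CONTEXT_ISOLATION_JS_CHECK', file: 'main.js', line: 12 },
];

// Findings from the latest scan
const scanResults = [
  { check: 'CONTEXT_ISOLATION_JS_CHECK', file: 'main.js', line: 12 },
  { check: 'AUXCLICK_JS_CHECK', file: 'renderer.js', line: 3 },
];

// Only the new AUXCLICK finding survives filtering
const fresh = filterFindings(scanResults, baseline);
```

A pipeline could then fail the build whenever the filtered list is non-empty.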
Hi @phosphore,

In retrospect, I think I filed this issue a little prematurely. My team and I were relatively new to Electron, so we started, among other things, by checking Electron's Security Recommendations, where this project is referenced. There it states:
So we hoped to run scans as part of our CI/CD pipeline. However, we realized that some cases resulting from our design choices could not be analyzed accurately, or are simply not supported due to the inherent naivety of static analysis, so we decided to:
Also I may have titled the issue misleadingly.
I may have accidentally mixed unsupported cases with false negatives. Actually, the main point of this issue was "how design decisions affect the outcome of the analysis", i.e. which code patterns are supported and which are not. Perhaps the wiki page for each check could list the cases that are currently supported, plus a (naturally incomplete) description of cases that are not, i.e. when a misconfiguration cannot be detected yet. But I see that this can become a time-consuming task. A quick "fix" could also be to address this issue in the introductory statement of the README, since it currently only advises being careful with some of the things Electronegativity can detect:
This could also help reduce the risk of providing a false sense of security when an application is in fact configured in an insecure manner that simply cannot be detected by Electronegativity. This is natural, given the challenges involved in thorough static analysis. However, less experienced developers may treat Electronegativity as a magic black box. I consider this issue closed. Thanks again for this helpful project! :-)
Guideline for handling False Positives / False Negatives
Hi,
is there a guideline for handling False Positives / False Negatives?
We are currently looking into using electronegativity for automated security checks in a project.
However, we noticed that our code design and architecture can have a great effect on the results.
For example, CONTEXT_ISOLATION_JS_CHECK:

Case a) Declaration of `webPreferences.contextIsolation` is part of the `BrowserWindow` configuration object literal. No issue is found; works as expected.

Case b) Value of `webPreferences.contextIsolation` is not part of the `BrowserWindow` configuration object literal. CONTEXT_ISOLATION_JS_CHECK fails, which was not expected.

Case c) `BrowserWindow` configuration is extracted to a variable and possibly to another file for reuse. CONTEXT_ISOLATION_JS_CHECK fails, which was not expected.
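Roughly, the three cases look like this. These are schematic snippets, not code from our actual project; the `BrowserWindow` class below is a stand-in for Electron's real class, defined only so the sketch is self-contained:

```javascript
// Stand-in for Electron's BrowserWindow, for illustration only.
class BrowserWindow {
  constructor(options) { this.options = options; }
}

// Case a) option written literally inside the call: detected as expected
const a = new BrowserWindow({ webPreferences: { contextIsolation: true } });

// Case b) value referenced rather than written literally: check fails
const isolate = true;
const b = new BrowserWindow({ webPreferences: { contextIsolation: isolate } });

// Case c) configuration extracted to a variable (possibly in another file): check fails
const configuration = { webPreferences: { contextIsolation: true } };
const c = new BrowserWindow(configuration);
```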
The reason for this became obvious when we looked at how the CONTEXT_ISOLATION_JS_CHECK is implemented.
It just leverages simple static analysis using only the AST which works fine for a).
However, b) and c) would require some form of control flow or data flow analysis.
Since control flow and data flow analysis are heavy lifting, especially for JavaScript projects, we don't expect this to be implemented soon (or ever).
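To make the limitation concrete, a toy AST-only check might look like the sketch below. This is our own hand-rolled illustration of the general approach, not Electronegativity's actual implementation, and the node shapes are simplified stand-ins rather than real ESTree nodes:

```javascript
// Toy AST-only check over simplified, hand-rolled AST nodes.
// Returns 'ok', 'issue', or 'manual-review' for a BrowserWindow call node.
function checkContextIsolation(callNode) {
  const arg = callNode.arguments[0];
  if (!arg || arg.type !== 'ObjectLiteral') {
    // Case c): argument is a variable reference; the AST alone cannot resolve it
    return 'manual-review';
  }
  const prefs = arg.properties.webPreferences;
  if (!prefs || prefs.type !== 'ObjectLiteral') return 'manual-review';
  const ci = prefs.properties.contextIsolation;
  if (!ci) return 'issue'; // option not set at all in this toy model
  if (ci.type === 'Literal') return ci.value === true ? 'ok' : 'issue';
  // Case b): value is an expression; resolving it needs data flow analysis
  return 'manual-review';
}
```

Note how the only safe answer for cases b) and c) is to flag the call for manual review rather than stay silent.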
For now, we are considering just using your checks for manual review without running electronegativity.
But a guide for handling false positives / false negatives, and for how design decisions affect the outcome of the analysis, would be nice.
Nevertheless, many thanks for this project and for the security recommendations for Electron!