“Benchmarking Static Analysis Tools for Web Security”

From Navigators

Revision as of 02:38, 10 February 2019 by Imedeiros (Talk | contribs)

Paulo Nunes, Ibéria Medeiros, José Fonseca, Nuno Ferreira Neves, Miguel Correia, Marco Vieira

IEEE Transactions on Reliability, vol. 67, no. 3, pp. 1159–1175, September 2018.

Abstract: Static analysis tools are recurrently used by developers to search for vulnerabilities in the source code of web applications. However, distinct tools provide different results depending on factors such as the complexity of the code under analysis and the application scenario, thus missing some of the vulnerabilities while reporting false problems. Benchmarks can be used to assess and compare different systems or components; however, existing benchmarks have strong representativeness limitations, disregarding the specificities of the environment where the tools under benchmarking will be used. In this paper, we propose a benchmark for assessing and comparing static analysis tools in terms of their capability to detect security vulnerabilities. The benchmark considers four real-world development scenarios, including workloads composed of real web applications with different goals and constraints, ranging from low-budget to high-end applications. Our benchmark was implemented and assessed experimentally using a set of 134 WordPress plugins, which served as the basis for the evaluation of five free PHP static analysis tools. Results clearly show that the best solution depends on the deployment scenario and the class of vulnerability being detected, therefore highlighting the importance of these aspects in the design of the benchmark and of future static analysis tools.
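To illustrate the kind of measurement such a benchmark performs, the sketch below scores a tool's warnings against a labeled workload of known vulnerability locations. This is a minimal illustration in Python, not the paper's actual methodology or metrics; the warning locations and the exact scoring formulas (precision, recall, F-measure are common choices for detection benchmarks) are assumptions for the example.

```python
# Hypothetical sketch: scoring one static analysis tool against a labeled
# workload, as a detection-capability benchmark might do. All locations
# below are made up for illustration; nothing here is from the paper.

def score(reported, actual):
    """Return (precision, recall, F-measure) for a tool's reports.

    reported: set of (file, line) warnings emitted by the tool
    actual:   set of (file, line) locations of known vulnerabilities
    """
    tp = len(reported & actual)   # correct warnings (true positives)
    fp = len(reported - actual)   # false alarms
    fn = len(actual - reported)   # missed vulnerabilities
    precision = tp / (tp + fp) if reported else 0.0
    recall = tp / (tp + fn) if actual else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Illustrative run with invented warning locations:
reported = {("a.php", 10), ("a.php", 22), ("b.php", 5)}
actual = {("a.php", 10), ("b.php", 5), ("c.php", 7)}
print(score(reported, actual))
```

Ranking tools by such scores per scenario and per vulnerability class is what lets the benchmark conclude that no single tool is best everywhere.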



Research line(s): Fault and Intrusion Tolerance in Open Distributed Systems (FIT)
