In their day-to-day work, pentesters, developers, and security specialists deal with processes such as Vulnerability Management (VM) and the (Secure) Software Development Life Cycle (S-SDLC).
These terms cover different but intertwined sets of practices and tools, even though their consumers differ.
Technological progress has not yet reached the point where a single tool can replace a human in analyzing the security of infrastructure and software.
It is worth understanding why this is so and what problems one runs into.
For our company, this is not only a research subject and a consulting task, but also the problem our product solves: the Deteact Application Security Platform.
Processes
The Vulnerability Management process is meant to continuously monitor the security of the infrastructure and to manage patching.
The Secure SDLC process (“secure development cycle”) is meant to maintain the security of an application during development and operation.
The part these processes share is Vulnerability Assessment, that is, the evaluation of and scanning for vulnerabilities.
The main difference between VM and SDLC scans is that in the former, the goal is to detect known vulnerabilities in third-party software or configuration. For example, an outdated version of Windows or the default SNMP community string.
In the latter, the goal is to detect vulnerabilities not only in third-party components (dependencies), but primarily in the code of the new product itself.
This creates differences in tools and approaches. In my opinion, the task of searching for new vulnerabilities in an application is much more interesting, since it is not limited to version fingerprinting, banner collection, brute-forcing passwords, etc.
For high-quality automated scanning of application vulnerabilities, algorithms are required that take into account the semantics of the application, its purpose, and specific threats.
An infrastructure scanner can often be replaced with a timer: the point is that, purely statistically, you can consider your infrastructure vulnerable if it has not been updated for, say, a month.
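As a toy illustration of that timer idea, here is a minimal sketch; the inventory data and the 30-day window are assumptions for illustration.

```python
# A minimal sketch of the "timer instead of a scanner" heuristic:
# flag any host whose last patch date is older than a threshold.
from datetime import datetime, timedelta

MAX_PATCH_AGE = timedelta(days=30)

inventory = {
    "web-01": datetime(2019, 5, 2),   # last successful patch run
    "db-01": datetime(2019, 3, 14),
}

def stale_hosts(inventory, now=None):
    now = now or datetime.utcnow()
    return [host for host, patched in inventory.items()
            if now - patched > MAX_PATCH_AGE]

print(stale_hosts(inventory))  # hosts to treat as presumably vulnerable
```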
Tools
Scanning, like security analysis in general, can be performed in both black-box and white-box fashion.
Black Box
In black-box scanning, the tool must be able to work with the service through the same interfaces its users work with.
Infrastructure scanners (Tenable Nessus, Qualys, MaxPatrol, Rapid7 Nexpose, etc.) look for open network ports, collect banners, identify installed software versions, and search their knowledge base for information on vulnerabilities in those versions. They also try to detect configuration errors such as default passwords or public data access, weak SSL ciphers, etc.
Web application scanners (Acunetix WVS, Netsparker, Burp Suite, OWASP ZAP, etc.) can also detect known components and their versions (for example, CMS, frameworks, JS libraries). The main steps of a scanner are crawling and fuzzing.
During crawling, the scanner collects information about the application's existing interfaces and HTTP parameters. During fuzzing, mutated or generated data is substituted into all discovered parameters in order to provoke an error and detect a vulnerability.
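A minimal sketch of that fuzzing step, assuming a hypothetical target URL, a toy payload list, and crude error markers:

```python
# Substitute a few payloads into each discovered query parameter
# and look for rough error signals in the response.
import requests

PAYLOADS = ["'", "<script>x</script>", "../../etc/passwd"]
ERROR_MARKERS = ["SQL syntax", "Traceback", "Warning:"]

def fuzz(url, params):
    findings = []
    for name in params:
        for payload in PAYLOADS:
            mutated = dict(params, **{name: payload})
            resp = requests.get(url, params=mutated, timeout=10)
            if resp.status_code >= 500 or any(m in resp.text for m in ERROR_MARKERS):
                findings.append((name, payload, resp.status_code))
    return findings

print(fuzz("http://testsite.local/search", {"q": "test", "page": "1"}))
```

Real scanners, of course, use far larger payload sets and more subtle detection signals than a 500 status or an error string, which is exactly where their limitations come from, as discussed below.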
These application scanners are classified as DAST and IAST – Dynamic and Interactive Application Security Testing, respectively.
White Box
White-box scanning differs more between the two processes.
As part of the VM process, scanners (Vulners, Incsecurity Couch, Vuls, Tenable Nessus, etc.) are often given access to the systems and run an authenticated scan. The scanner can then read installed package versions and configuration parameters directly from the system instead of guessing them from network service banners, which makes the scan more accurate and complete.
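A minimal sketch of what such an authenticated check boils down to, assuming SSH access to a Debian-like host; the host name and the advisory table are toy assumptions:

```python
# Read installed package versions over SSH and compare them to an
# advisory feed (here, a hard-coded toy table).
import subprocess

def installed_packages(host):
    # dpkg-query prints "name<TAB>version" per line on Debian/Ubuntu hosts
    out = subprocess.check_output(
        ["ssh", host, "dpkg-query -W -f '${Package}\\t${Version}\\n'"],
        text=True)
    return dict(line.split("\t") for line in out.splitlines() if line)

# Toy advisory data: package -> set of versions known to be vulnerable.
vulnerable_versions = {"openssl": {"1.0.1f-1ubuntu2"}}

def audit(host):
    pkgs = installed_packages(host)
    return [(name, ver) for name, ver in pkgs.items()
            if ver in vulnerable_versions.get(name, ())]

print(audit("web-01"))
```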
When it comes to white-box scanning of applications (CheckMarx, HP Fortify, Coverity, RIPS, FindSecBugs, etc.), we usually mean static code analysis and the corresponding tools of the SAST class (Static Application Security Testing).
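In its simplest form, a SAST rule is just a search for dangerous calls. A minimal sketch using Python's own AST; the sink list is a toy assumption:

```python
# Walk the Python AST and flag calls to known dangerous sinks.
import ast

SINKS = {"eval", "exec", "system"}  # e.g. os.system

class SinkFinder(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        # Handles both bare names (eval) and attributes (os.system).
        name = getattr(node.func, "id", getattr(node.func, "attr", None))
        if name in SINKS:
            self.findings.append((name, node.lineno))
        self.generic_visit(node)

source = 'import os\nos.system("ls " + user_input)\n'
finder = SinkFinder()
finder.visit(ast.parse(source))
print(finder.findings)  # [('system', 2)]
```

Real SAST engines add control-flow and data-flow (taint) analysis on top of this, which is where most of the complexity, and the trade-offs discussed below, come from.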
Problems
There are many problems with scanning! I deal with most of them personally, both when providing services for building scanning and secure development processes and when performing security assessments.
I will single out three main groups of problems, confirmed by conversations with engineers and heads of information security teams at various companies.
Web Application Scanning Problems
- Complexity of implementation. Scanners need to be deployed, configured, customized for each application, given a test environment for scans, and embedded into the CI/CD process for any of this to be effective. Otherwise it becomes a useless formal procedure that produces nothing but false positives.
- Duration of scanning. Even in 2019, scanners do a poor job of deduplicating interfaces and can spend days scanning a thousand pages with 10 parameters each, treating them all as different even though the same code is responsible for them (a simple deduplication sketch follows this list). Meanwhile, the decision to deploy to production within the development cycle has to be made quickly.
- Meager recommendations. Scanners give fairly generic recommendations, and a developer cannot always quickly tell from them how to reduce the risk level or, more importantly, whether it has to be fixed right now or can safely wait.
- Destructive effect on the application. Scanners may well carry out a DoS attack on an application, and they can also create a large number of entities or modify existing ones (for example, create tens of thousands of comments on a blog), so you should not mindlessly launch a scan in production.
- Low quality vulnerability detection. Scanners typically use a fixed array of “payloads” and can easily miss a vulnerability that does not fit into their known application behavior scenario.
- The scanner does not understand the application's functions. Scanners by themselves have no idea what an “online bank”, a “payment”, or a “comment” is. To them there are only links and parameters, so an entire layer of possible business logic vulnerabilities remains completely uncovered: a scanner will never think to trigger a double charge, peek at other people's data by ID, or inflate a balance through rounding.
- The scanner does not understand the semantics of pages. Scanners cannot read FAQs, cannot recognize CAPTCHAs, will not figure out on their own how to register and then re-login, that they must not click “logout”, or how to sign requests when parameter values change. As a result, most of the application may not get scanned at all.
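The deduplication mentioned above can start from something as simple as this sketch, which collapses URLs into templates of path segments plus sorted parameter names; the numeric-segment heuristic is an assumption for illustration:

```python
# Normalize each URL into (path template, parameter names), so pages
# served by the same code collapse into one interface to scan.
from urllib.parse import urlparse, parse_qs

def interface_key(url):
    parts = urlparse(url)
    path = "/".join("{id}" if seg.isdigit() else seg
                    for seg in parts.path.split("/"))
    params = tuple(sorted(parse_qs(parts.query)))
    return (path, params)

urls = [
    "http://shop.local/item/1?color=red&size=m",
    "http://shop.local/item/2?size=l&color=blue",
]
# Both URLs map to the same interface -> scan it once, not twice.
print({interface_key(u) for u in urls})
```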
Source Code Scanning Problems
- False positives. Static analysis is a complex task that involves many trade-offs. Accuracy is often sacrificed, and even expensive enterprise scanners generate a huge number of false positives.
- Complexity of implementation. To increase the accuracy and completeness of static analysis, the scanning rules have to be refined, and writing those rules can be prohibitively time-consuming. Sometimes it is easier to find and fix every instance of a bug in the code than to write a rule that detects such cases.
- Lack of dependency support. Large projects depend on a large number of libraries and frameworks that extend the capabilities of the programming language. If the scanner's knowledge base contains no information about the dangerous places (“sinks”) in these frameworks, they become a blind spot: the scanner simply will not understand the code.
- Duration of scanning. Finding vulnerabilities in code is a hard task algorithmically as well, so the process can drag on and demand significant computing resources.
- Low coverage. Despite the resource consumption and scan duration, the developers of SAST tools still have to resort to trade-offs and skip some of the states the program can be in.
- Reproducibility of findings. Pointing to the specific line and the call stack that leads to a vulnerability is nice, but in practice the scanner often does not provide enough information to verify the vulnerability from the outside. After all, the flaw may sit in dead code that is unreachable for an attacker.
Infrastructure Scanning Problems
- Insufficient inventory. In large infrastructures, especially geographically distributed ones, the hardest part is often simply figuring out which hosts to scan. In other words, the scanning task is closely tied to the asset management task.
- Bad prioritization. Network scanners often produce piles of findings that are not exploitable in practice but formally carry a high risk level. The consumer receives a report that is hard to interpret, with no clarity about what to fix first (a simple prioritization sketch follows this list).
- Meager recommendations. The scanner's knowledge base often contains only very general information about a vulnerability and how to fix it, so admins have to arm themselves with Google. The situation is somewhat better with white-box scanners, which can output the specific command that fixes the flaw.
- Manual work. An infrastructure can have many nodes, which means potentially many findings, and the reports on them have to be taken apart and analyzed by hand at every iteration.
- Poor coverage. The quality of infrastructure scanning depends directly on the size of the knowledge base of vulnerabilities and software versions. Yet it turns out that even the market leaders do not have a comprehensive knowledge base, and the databases of free solutions hold plenty of information that the leaders lack.
- Patching problems. Most often, patching an infrastructure vulnerability means updating a package or changing a configuration file. The big problem is that a system, especially a legacy one, can behave unpredictably after an upgrade. In effect, you end up running integration tests on live production infrastructure.
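The prioritization mentioned above can begin with a naive score like this sketch; the weights for asset criticality and exploit availability are assumptions for illustration:

```python
# Weight the raw CVSS score by asset criticality and known exploit
# availability so the report starts with what actually matters.
def priority(cvss, asset_criticality, exploit_public):
    # asset_criticality: 1 (lab box) .. 3 (crown jewels)
    # exploit_public: is a working exploit publicly available?
    return cvss * asset_criticality * (2.0 if exploit_public else 1.0)

findings = [
    {"host": "lab-07", "cve": "CVE-2019-0001", "cvss": 9.8,
     "criticality": 1, "exploit": False},
    {"host": "pay-01", "cve": "CVE-2019-0002", "cvss": 7.5,
     "criticality": 3, "exploit": True},
]
findings.sort(key=lambda f: priority(f["cvss"], f["criticality"], f["exploit"]),
              reverse=True)
for f in findings:
    print(f["host"], f["cve"])  # pay-01 comes first despite the lower CVSS
```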
Approaches
So what can be done?
I will talk in more detail about examples and ways to deal with many of the listed problems in the following parts; for now, here are the main directions to work in:
- Aggregation of various scanning tools. With the right use of multiple scanners, the knowledge base and the detection quality improve significantly. You can even find more vulnerabilities than the sum of all the scanners launched separately would, while assessing the risk level more accurately and making more recommendations (a merge sketch follows this list).
- Integration of SAST and DAST. You can increase DAST coverage and SAST accuracy by exchanging information between them: the source code reveals the existing routes, and DAST can check whether a vulnerability is visible from the outside.
- Machine Learning™. In 2015, I talked (and more) about using statistics to give scanners a hacker's intuition and speed them up. This is definitely fuel for the future development of automated security analysis.
- Integration of IAST with autotests and OpenAPI. Within the CI/CD pipeline, you can build a scanning process around tools that work as an HTTP proxy and functional tests that run over HTTP. OpenAPI/Swagger tests and contracts give the scanner the missing information about data flows and make it possible to scan the application in its various states.
- Correct configuration. For each application and infrastructure, a suitable scanning profile has to be created, taking into account the number and nature of the interfaces and the technologies used.
- Scanner customization. Often an application cannot be scanned without reworking the scanner. An example is a payment gateway where every request must be signed: without a connector for the gateway's protocol, scanners will mindlessly pound it with incorrectly signed requests (a signing-hook sketch also follows this list). Specialized scanners also have to be written for specific classes of flaws, such as Insecure Direct Object Reference.
- Risk management. Using various scanners and integrating with external systems such as Asset Management and Threat Management makes it possible to use many parameters for assessing the risk level, so that management gets an adequate picture of the current security state of the development or the infrastructure.
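A minimal sketch of the aggregation idea from the list above: normalize findings from several scanners to a common shape and deduplicate them by host and vulnerability identifier. The input report shapes here are invented for illustration.

```python
# Normalize each tool's report to (host, vuln_id, severity) and keep
# the highest severity per unique finding.
def normalize_nessus(report):
    return [(r["host"], r["cve"], r["cvss"]) for r in report]

def normalize_zap(report):
    return [(r["target"], r["alert_ref"], r["risk_score"]) for r in report]

def merge(*finding_lists):
    merged = {}
    for findings in finding_lists:
        for host, vuln_id, severity in findings:
            key = (host, vuln_id)
            merged[key] = max(merged.get(key, 0), severity)
    return merged

nessus = [{"host": "web-01", "cve": "CVE-2019-0001", "cvss": 7.5}]
zap = [{"target": "web-01", "alert_ref": "CVE-2019-0001", "risk_score": 9.0}]
print(merge(normalize_nessus(nessus), normalize_zap(zap)))
```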
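And a sketch of the customization idea, a "connector" for a gateway that signs every request: an HMAC hook on a requests session, so any scanner traffic routed through it carries a valid signature. The header name, secret, and signing scheme are all assumptions for illustration.

```python
# A signing hook attached to a requests session: every prepared
# request gets an HMAC of its body in an X-Signature header.
import hashlib
import hmac

import requests

SECRET = b"demo-shared-secret"  # assumed shared secret

class SigningSession(requests.Session):
    def prepare_request(self, request):
        prepared = super().prepare_request(request)
        body = prepared.body or ""
        if isinstance(body, str):
            body = body.encode()
        prepared.headers["X-Signature"] = hmac.new(
            SECRET, body, hashlib.sha256).hexdigest()
        return prepared

session = SigningSession()
# Route scanner traffic through this session and the gateway will
# accept the requests instead of rejecting them on signature checks:
# session.post("http://gateway.local/pay", data={"amount": "1.00"})
```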
Stay tuned, and let's disrupt vulnerability scanning!