There are many factors which affect scan time. The most significant factors, and ways you can alter them, are discussed in this article.
Scan Scope Size
A scan with a large number of targets may take a long time to complete. To reduce completion times, it is advisable, where possible, to split a large scan into smaller individual scans scheduled to run at different times.
Scan Hub Contention
The scheduler tries to pick the least busy hub for a scan at the time the scan is run; however, once a scan has started on a hub it cannot be moved. Some of our larger customers may fire off a few big scans, each of which can consume the resources of a hub. We aim for around 50% utilization at any one time, but sometimes that means a particular hub can still be busy. We are also working on splitting larger scopes up further, which should improve this in the future.
Application Response Times
Slow response times can also cause excessive scan times - if an application becomes unresponsive, it can add a delay of a couple of seconds to every request. For a request that normally takes 200ms, becoming 2.2 seconds is an elevenfold (1000%) increase in per-request time, which will clearly have a significant impact on the overall scan duration.
In addition to the application itself being slow to respond, some WAF and IDS devices can force the scanner to back off its request rate, further slowing scans.
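To put that in perspective, here is a rough, purely illustrative calculation. The request count and the assumption that requests are handled one at a time are hypothetical examples, not AppCheck behaviour; it simply shows how a small per-request delay compounds across a whole scan:

```python
# Illustrative only: rough effect of per-request latency on total scan time.
# The request count and response times below are hypothetical examples,
# not AppCheck defaults.

requests_in_scan = 50_000          # total HTTP requests a scan might issue

def scan_hours(avg_response_seconds: float) -> float:
    """Total time spent waiting on responses, assuming requests are serial."""
    return requests_in_scan * avg_response_seconds / 3600

print(f"at 200ms per request: {scan_hours(0.2):.1f} hours")   # ~2.8 hours
print(f"at 2.2s per request:  {scan_hours(2.2):.1f} hours")   # ~30.6 hours
```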
Extensive Web Application Footprint
For a large web application, the generated leaf graph (attack surface) will be extensive, and some checks may need to execute against every leaf discovered. This can lead to a very large number of HTTP requests being made.
If you wish to artificially restrict the scope then you can try a few different things:
- Limit the crawl by time by setting the config flag 30_min_crawl in your scan settings
- Turn off "brute force discovery" to disable brute-force guessing of paths and rely on crawling via measures such as site navigation, response parsing, and robots.txt examination.
- In extreme situations, you can turn off default crawling entirely and rely only on a number of defined GoScript seeded journeys.
Duplicate Scan Targets
It is possible to configure your scan targets in such a way as to scan the same application multiple times within one scan. See this article for an explanation.
UDP Port Scanning
If an infrastructure scan is configured to scan all 65,535 UDP ports then this will take some time due to the way UDP works - there is no handshake or state mechanism, so when no response is received the sender has to wait for a timeout and then perform retries in case the packets were lost. For more details, see also Port scanning - things to consider
UDP Port Scan Timings
If UDP port scanning is enabled, this option controls how long to wait for a response before assuming the port is dead and moving on. Longer timings should lead to more accurate results, but the scan will take longer to complete. More aggressive timings will mean a scan completes faster, but there is a chance of missing some services if no response is received in time.
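To illustrate why this adds up, the sketch below is a minimal, hypothetical UDP probe - the timeout and retry values are example assumptions, and this is not AppCheck's implementation. Every silent port costs the full timeout multiplied by the number of attempts:

```python
# Illustrative only: a naive UDP probe showing why silent ports are slow.
# The timeout and retry counts are example values, not AppCheck's settings.
import socket

def probe_udp(host: str, port: int, timeout: float = 2.0, retries: int = 2) -> bool:
    """Return True if any response is received; otherwise give up after retries."""
    for _ in range(retries + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            try:
                s.sendto(b"probe", (host, port))
                s.recvfrom(1024)        # many UDP services never reply to junk data
                return True
            except socket.timeout:
                continue                # no reply: retry in case the packet was lost
            except OSError:
                return False            # e.g. ICMP "port unreachable": port is closed
    return False

# Worst case per silent port: (retries + 1) * timeout = 3 * 2.0s = 6 seconds,
# which is why scanning all 65,535 UDP ports can take a very long time.
```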
Platform-Specific Checks for Platforms You Do Not Use
The AppCheck scanner includes plugins aimed at specific platforms, such as certain types of database (MySQL, Postgres, etc.) or CMS (e.g. WordPress, Umbraco). These can be found in:
Web Application Scanner Settings/Plugins
Running these plugins can involve a lot of requests and substantially increase the time it takes to complete your scan. If you are certain that a platform does not exist within your environment then you can disable checks aimed solely at that platform.
SQL Databases & SQL Injection
In the case of SQL Injection detection there is one plugin which can be configured with options for various database platforms.
Do not click the tick-box to disable the plugin unless your environment contains no SQL databases at all. Instead, you'll usually want to configure the plugin to select which database platforms it should target. Click the name of the plugin to open its settings panel, then the "Configuration" tab, then click the down arrow next to "Supported Database Platforms". Here you can untick any platforms that you do not use. Then close the settings panel and don't forget to save the scan.
Make sure you only untick platforms that you definitely do not use - sometimes there is a communication chain starting with your application that reaches other parts of your environment and interacts with database platforms you might not immediately think of when considering your one application in isolation.
This is one of the biggest savings on scan time. Even if you do use relational SQL databases, you could consider running scans with and without this plugin; for example, a “SQLi only” scan alongside an “All except SQLi” scan. Detecting SQL injection to the level of accuracy we aim for is simply slow by nature; there is no way around that without risking missed cases.
Command Injection
This is a rare vulnerability but an expensive plugin. Consider disabling it here and running it in a separate, longer-running scan if that suits your objectives. If not, leave it enabled.
Path Traversal
As above. This vulnerability is more common, but the plugin is long-running; you could disable it here and run it within a slower scan that runs less frequently.
HTML5 Category
Disabling all of the PostMessage and WebSocket checks will help speed up the scan. These are another rare category of finding, with a maximum CVSS of 4.3.
Content Management Systems (CMS)
You'll find plugins aimed at various CMS platforms in the "Content Management Systems" section. You can simply untick the plugins for platforms you definitely do not use. Again, do not forget to save your scan after doing so.
Build and Configuration Review Category
The “JavaScript Library Validation” plugin reports on outdated JavaScript libraries. It uses a real browser, which has an overhead. If you don’t need to know about outdated jQuery etc. in every scan, this plugin can be disabled for a speed boost.
Other Scan Settings
Other scan settings can have an impact on scan time. By eliminating some of these checks and avoiding edge cases, the scan time can be reduced:
- Forced discovery, this option controls whether the scanner will attempt to discover resources that would not normally be found during the crawl, looking for potentially hidden dangerous paths such as .git and .svn repositories.
- DOM XSS checks, this option tells AppCheck-NG to use a real browser to detect and confirm XSS vulnerabilities. These checks can be very expensive and increase scan time; disabling them is not recommended, but switching this option off will decrease scan time.
- Scan REST paths, with this option enabled AppCheck-NG will scan the URL path segments for vulnerabilities. This is only applicable when an application uses a routing system; if paths correspond to actual folders it isn't required.
- Scan parameter names, this option instructs AppCheck-NG to scan the names of parameters for vulnerabilities and not just the values. With this option enabled the attack surface of all parameters is doubled (see the sketch after this list).
- Scan referrer headers, with this option enabled AppCheck-NG will attack the referrer header of an application.
- Scan Cookie headers, with this option enabled AppCheck-NG will attack the cookie header of an application.
- Scan user agent headers, with this option enabled AppCheck-NG will attack the user agent header of an application.
- Scan all other headers, with this option enabled AppCheck-NG will test every header of an application.
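As a rough illustration of how these options multiply the work per request, the sketch below uses purely hypothetical counts (it is not AppCheck's logic) to show how each toggle adds injection points that every relevant check must then be run against:

```python
# Illustrative only: rough count of injection points per request,
# using hypothetical numbers rather than anything AppCheck-specific.

query_params = 5        # e.g. ?page=1&sort=name&q=test&lang=en&debug=0
path_segments = 3       # e.g. /api/v1/users
extra_headers = 3       # e.g. Referer, Cookie, User-Agent

points = query_params                       # parameter values (always scanned)

scan_parameter_names = True
if scan_parameter_names:
    points += query_params                  # names as well as values: doubled

scan_rest_paths = True
if scan_rest_paths:
    points += path_segments                 # each URL path segment is attacked

scan_headers = True
if scan_headers:
    points += extra_headers                 # each header is attacked separately

print(f"injection points per request: {points}")   # 5 -> 16 with everything on
```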
Advanced Config Options & Hidden Flags
low_hanging_vulns
This config flag prevents the scanner from running the discovery phase of the scan, so the attack surface can only be determined through crawling and no hidden content will be discovered. The crawl time is also limited, which drastically reduces the potential attack surface of a scan. During the attack phase the scanner looks only for common cases of vulnerabilities and skips edge cases, making the scan a lot quicker.