Scan Slowness #901
Comments
CPU during this time is 100%, and I suspect this is the issue.
I heavily suspect the priority queues. EDIT: did some testing on the priority queue and it is performant even at very large sizes. I think the problem is that we're iterating through them in order to provide status messages.
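A minimal timing sketch along those lines, assuming a plain `asyncio.PriorityQueue` loaded with a few hundred thousand synthetic items (not BBOT's actual event queues), and peeking at the private `_queue` heap the way a status report might:

```python
import asyncio
import time


async def main():
    # Simulate a large scan backlog: ~250K prioritized items.
    queue = asyncio.PriorityQueue()
    for i in range(250_000):
        queue.put_nowait((i % 10, f"event-{i}"))

    # Iterate the underlying heap (a private attribute, used here only for measurement),
    # roughly what a status message would do to count/summarize pending events.
    start = time.perf_counter()
    pending = sum(1 for _ in queue._queue)
    print(f"Iterated {pending} items in {time.perf_counter() - start:.4f}s")


asyncio.run(main())
```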
Removing the status messages did not speed up the scan. In my tests, iterating through 200K+ objects was basically instantaneous. CPU is still at 100%, so I think it will pay to do some profiling to find the top offending function calls.
Ran a baseline test where I loaded a bunch of IP addresses from
CPU was 100% during this time and cProfile reveals that the vast majority of CPU time was taken up by the
The thing to do now is to add DNS resolution and see if anything changes.
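A rough sketch of that kind of cProfile baseline, with the scan driver stubbed out as a placeholder (not BBOT's real entry point):

```python
import cProfile
import pstats


def run_baseline_scan():
    # Placeholder for the real scan: load the list of IP address targets
    # and run the scan to completion without DNS resolution enabled.
    pass


profiler = cProfile.Profile()
profiler.enable()
run_baseline_scan()
profiler.disable()

# Show the 20 most expensive calls by cumulative time to find the top offenders.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
```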
On enabling DNS, CPU ceases to be the bottleneck. I suspect the answer here is just to run a giant internet scan and profile it.
Ran a simple scan on a Linode with 2 CPUs and 4GB RAM: `poetry run bbot -t targets.txt -f subdomain-enum email-enum cloud-enum`. Almost immediately the scan became unresponsive. CPU was at 0%, with no memory usage to speak of. Pressing enter on the console had no effect, but after thirty seconds or so there would be a flurry of DNS timeouts relating to wildcard detection, the scan would register the keypresses, and then it would return to being unresponsive. There is no good reason it should be behaving this way. I suspect it's something to do with the asyncio event loop getting overwhelmed, but I'm not sure how this is possible. I intend to start ripping out locks and rate limits until it starts to behave properly.
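One way to test the "something is holding the event loop" theory (a sketch of the general technique, not what was actually run against BBOT) is asyncio's debug mode, which logs any task step or callback that blocks the loop for longer than `slow_callback_duration`:

```python
import asyncio
import logging
import time

logging.basicConfig()  # asyncio's slow-callback warnings are logged at WARNING level


async def well_behaved():
    await asyncio.sleep(0.5)  # yields to the loop, harmless


async def loop_hog():
    time.sleep(2)  # synchronous sleep: freezes every other coroutine in the process


async def main():
    loop = asyncio.get_running_loop()
    loop.slow_callback_duration = 0.1  # warn on anything holding the loop > 100 ms
    await asyncio.gather(well_behaved(), loop_hog())


# debug=True makes asyncio log e.g. "Executing <Task ... loop_hog ...> took 2.0 seconds"
asyncio.run(main(), debug=True)
```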
Keeping track of my troubleshooting steps here:
This indicates that one of the modules is causing the issue; most likely it is holding up the event loop (naughty naughty!)
Let's play a game: which of these modules is the naughty one?
`dnszonetransfer`... you motherfucker.
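If the module is blocking because dnspython's zone-transfer API is synchronous, the usual fix is to push that call onto a worker thread. A minimal sketch under that assumption (not necessarily how the module was actually patched):

```python
import asyncio

import dns.query
import dns.zone


def blocking_zone_transfer(nameserver: str, domain: str) -> dns.zone.Zone:
    # dns.query.xfr / dns.zone.from_xfr are synchronous; called directly from a
    # coroutine they can stall the entire event loop for the full timeout.
    return dns.zone.from_xfr(dns.query.xfr(nameserver, domain, timeout=10, lifetime=30))


async def zone_transfer(nameserver: str, domain: str) -> dns.zone.Zone:
    # Run the blocking transfer in the default thread pool so other coroutines keep running.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, blocking_zone_transfer, nameserver, domain)
```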
Ran a big internet scan with
And the problematic files:
Lots of DNS-related stuff. Most of these were only 0.1 or 0.2 seconds, and CPU was bumping up against 100% during this time, so it's possible there isn't any bug here, and that the combination of high CPU and such a large volume of DNS requests was the cause of the delays.
cProfile from a long scan:
The main issue was
Reopening this because I realized something about the DNS calls. At first I dismissed the high CPU time of the DNS-related functions because there were over 1.3 million calls to
When we enabled
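For a rough sense of scale: at 1.3 million calls, even 0.1 ms of overhead per call works out to about 130 seconds of CPU time, so the sheer call volume alone can account for a lot of CPU without any individual call being slow.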
False alarm on the DNS stuff. Closing, will reopen if needed.
During some very large scans, the scan can gradually slow to a crawl. I ran into this issue with `use_previous` from the `asset_inventory` module, where after a certain point, the events slowed to the point of being output in small spurts of ~25 every 5-10 seconds.