Passive Subdomain Enum takes forever on large domains #1045
Comments
@lappsec thanks for opening an issue. This kind of feedback helps a lot in speeding up BBOT and making it as efficient as possible. At first glance, I would guess the bottleneck in this situation is DNS. Whenever you see a lot of events in the queue like that, they are waiting to be resolved. DNS resolution is considered by BBOT to be passive, so even during a passive scan, it will perform DNS lookups on each subdomain for each record type - A, AAAA, MX, NS, etc. If you consider the extra checks that need to happen for wildcards, this turns out to be quite a few DNS queries, probably on average about ten or twenty per subdomain. For this reason, one of the most important requirements is a good internet connection and fast DNS servers configured in your OS (i.e. in your /etc/resolv.conf), preferably as many of them as possible. Here are some questions that, if you can answer them, will help us narrow down the problem:
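As a rough illustration of why the query count balloons, here is a minimal sketch of per-subdomain, per-record-type resolution using dnspython (2.x assumed). This is not BBOT's actual implementation, and the record-type list and hostnames are examples only:

```python
# Not BBOT's actual code -- a minimal sketch (using dnspython) of why even a
# passive scan generates many DNS queries: each subdomain gets looked up for
# several record types, and a real scanner adds wildcard checks on top.
import asyncio

import dns.asyncresolver

RECORD_TYPES = ["A", "AAAA", "CNAME", "MX", "NS", "TXT", "SOA"]

async def resolve_all_types(name: str) -> dict:
    resolver = dns.asyncresolver.Resolver()  # uses the OS resolvers (/etc/resolv.conf)
    results = {}
    for rdtype in RECORD_TYPES:
        try:
            answer = await resolver.resolve(name, rdtype)
            results[rdtype] = [str(r) for r in answer]
        except Exception:
            # NXDOMAIN / NoAnswer / timeout -- the query still had to be sent
            results[rdtype] = []
    return results

async def main():
    # Seven record types per host, plus wildcard detection in a real scanner,
    # quickly adds up to the 10-20 queries per subdomain mentioned above.
    for host in ("www.example.com", "mail.example.com"):  # example hostnames only
        print(host, await resolve_all_types(host))

if __name__ == "__main__":
    asyncio.run(main())
```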
@TheTechromancer Thanks for the quick response, and also for all the work you've put into bbot! That certainly makes sense with the DNS bottleneck, especially when dealing with a large domain space that has a lot of wildcards.
I can always try adding more resolvers to the system and see if that helps. If you have any more ideas or need more info, let me know.
Thanks, that helps a lot. Based on that information, I think it's safe to rule out your internet/DNS servers as the cause. CPU usage being at 100% during that phase of the scan is definitely abnormal and probably indicates a bug. I'll be digging deeper into this. BBOT should have no problem scanning a huge domain like comcast.com, so it is high priority to get this fixed. EDIT: In the meantime, if you're able, would you mind running the same scan again with debugging enabled? EDIT2: On second thought, don't bother with the debug stuff. I was able to reproduce it on my end.
Sounds good, I was waiting until another task was finished before trying it again but will hold off. Let me know if you need more input from me.
Okay, already one interesting finding from this. Did some testing, and Google's DNS servers are rate-limiting us; Cloudflare's seem unaffected. So removing 8.8.8.8 and 8.8.4.4 from your /etc/resolv.conf should help. There may still be some DNS optimizations we can do within BBOT, which I will look into. But in the meantime, for big domains like comcast.com, it seems to perform best if you decrease the number of threads.
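A rate-limit check along these lines can be sketched with dnspython: fire a burst of queries at a single resolver and count how many time out. This is a hypothetical probe, not the exact test run above; the resolver IPs and query names are examples only:

```python
# Hypothetical rate-limit probe, not the exact test run above: fire a burst of
# queries at a single resolver and count how many time out.
import asyncio

import dns.asyncresolver
import dns.exception

async def probe(server: str, count: int = 200) -> int:
    resolver = dns.asyncresolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 2.0  # seconds before a query is considered dropped

    async def one(i: int) -> bool:
        try:
            await resolver.resolve(f"www{i}.example.com", "A")
            return True
        except dns.exception.Timeout:
            return False  # likely dropped due to rate limiting
        except Exception:
            return True  # NXDOMAIN etc. still means the server answered

    results = await asyncio.gather(*(one(i) for i in range(count)))
    return results.count(False)

async def main():
    for server in ("8.8.8.8", "1.1.1.1"):
        print(f"{server}: {await probe(server)} timed-out queries out of 200")

asyncio.run(main())
```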
Based on my tests, it seems like this scan was suffering from a few different small problems that combined to cause a lot of trouble:
These fixes have been pushed to the feature branch. There is one more issue which is a bit more subtle, but I think if we can fix it, it will speed this up even more. For my own future reference, this was the result of the cProfile for the above scan:
This line in particular:
Indicates a possible performance issue with BBOT's DNS cache.
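For anyone wanting to capture this kind of measurement themselves, the standard-library pattern is to dump a cProfile run to a file and rank it with pstats. This is a generic sketch, not the exact command used for the scan above, and `run_scan` is just a stand-in name:

```python
import cProfile
import pstats

def run_scan():
    ...  # stand-in for whatever code is being profiled (e.g. a scan)

# Dump profiling data to a file...
cProfile.run("run_scan()", "scan.prof")

# ...then rank the hottest call sites by cumulative time.
stats = pstats.Stats("scan.prof")
stats.sort_stats("cumulative").print_stats(20)
```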
This is great - thanks a lot for your work on this. Damn you, Google! I will try again based on your advice and see how it goes. Seems like it was just the perfect storm of minor problems.
Hey, I was checking my server and noticed I have 127.0.0.53 too in my /etc/resolv.conf. Then I found this:
With this command:
I got this:
There is an explanation here that seems related and helpful:
I updated my DNS resolvers like this:
And copy-pasted these IPs:
I didn't get bbot's initial warning which says "I'm only using one dns server and it's better to add more"; however, from the result of this command:
I still got some warnings. Now, I tried to add the DNS resolvers into
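As an aside, when /etc/resolv.conf only lists the systemd-resolved stub at 127.0.0.53, the upstream servers it forwards to are usually recorded in /run/systemd/resolve/resolv.conf. A small helper sketch for comparing the two (my own example, not part of BBOT, and the paths assume a typical systemd-resolved setup):

```python
# My own helper sketch, not part of BBOT: when /etc/resolv.conf only lists the
# systemd-resolved stub (127.0.0.53), the upstream servers it forwards to are
# usually recorded in /run/systemd/resolve/resolv.conf.
from pathlib import Path

def nameservers(path: str) -> list:
    p = Path(path)
    if not p.exists():
        return []
    return [
        line.split()[1]
        for line in p.read_text().splitlines()
        if line.startswith("nameserver") and len(line.split()) > 1
    ]

print("stub resolver(s):", nameservers("/etc/resolv.conf"))
print("upstream servers:", nameservers("/run/systemd/resolve/resolv.conf"))
```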
I wouldn't recommend using those DNS servers like that. They are good for DNS brute forcing, but there isn't much advantage to using them in your OS. Your scan will be much faster and more reliable if you pick one or two extremely fast servers like 1.1.1.1 and 1.0.0.1. EDIT: we are still learning the optimal setup. Soon the warning about having only one DNS server will probably be replaced by a DNS speed test that will check for rate limiting etc. and warn you if your DNS servers are too slow.
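The speed test mentioned above could look roughly like the following: time a few lookups against each candidate server and keep the fastest one or two. A sketch only, not the planned BBOT feature; the candidate IPs and test name are examples, and repeated lookups may be answered from the resolver's cache:

```python
# Sketch of the kind of resolver speed test described above: time a few lookups
# against each candidate server and keep the fastest one or two.
import time

import dns.resolver

CANDIDATES = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]

def average_latency(server: str, samples: int = 5) -> float:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 2.0
    total = 0.0
    for _ in range(samples):
        start = time.monotonic()
        try:
            resolver.resolve("example.com", "A")
        except Exception:
            return float("inf")  # treat failures as unusably slow
        total += time.monotonic() - start
    return total / samples

ranked = sorted(CANDIDATES, key=average_latency)
print("fastest resolvers:", ranked[:2])
```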
Thanks a lot :) My previous scan went from the usual 50 minutes down to 34 minutes, and I got one DNS failure error. For my next scans, I will do as you said :)
This issue has been fixed in #1051, which will soon be merged into dev. Thanks to the combined fixes in this branch, the scan now completes in under 1 hour:
Describe the bug
I'll start by saying I know that large domain spaces will generally take longer to scan, and that this issue is part question and part feature request.
I've found that when running a subdomain enum on large domain spaces the scan can take forever. This is even after disabling the more aggressive modules like massdns. For example, I ran a subdomain enum scan on comcast.com for a test with the following command:
bbot -t comcast.com -f subdomain-enum --output-dir /root/subdomain_enum/logs --name comcast.com -y -rf passive --config /root/subdomain_enum/bbot_secrets.yml -em ipneighbor,asn,massdns,postman
So far it has been running for 13 hours and still has 37,000 events in the queue, as seen here:
Expected behavior
I would expect a passive subdomain enumeration scan to take far less time, at least when the massdns module is not enabled. It seems like there's a bottleneck somewhere since the number of events in the queue only goes down in small decrements.
That also leads me to a question: are there any optimization options that can be used to limit scan times? It would be great to have an option to limit the amount of time a scan and/or module can run. This is especially helpful when running a scan in a non-interactive environment (part of another workflow, cron job, etc.) where you're not actively monitoring its execution. Amass has a similar option for limiting the runtime of a scan. Unfortunately, it never actually worked for me, which is why I wanted to switch to bbot, but I am now running into similar issues.
BBOT Command
bbot -t comcast.com -f subdomain-enum --output-dir /root/subdomain_enum/logs --name comcast.com -y -rf passive --config /root/subdomain_enum/bbot_secrets.yml -em ipneighbor,asn,massdns,postman
OS, BBOT Installation Method + Version
OS: Ubuntu 20.04, Installation method: Docker, BBOT version: v1.1.5
BBOT Config
The config is the default that comes in the docker image.
Logs
Attached
debug.log