Reviewing Subdomain Registry / Private Request Space #1793
IMO part of the solution to this problem is automatically removing entries from the private section once they cease to qualify for inclusion (e.g. no SOA record). This would remove the need for PRs such as #1753, as well as the type of PR referenced by @dnsguru above, freeing up volunteer time. Perhaps a script could run regularly and submit PRs with proposed removals and their rationale? I'm happy to make a first attempt at writing such a script. I'd propose using Python and GitHub Actions, assuming the maintainers are happy with those technology choices.
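As a first approximation, the core check might look something like the following. This is only a minimal sketch, assuming the dnspython package; parsing `public_suffix_list.dat` and opening the actual PR are out of scope here, and the entry list is a placeholder.

```python
# Hypothetical sketch of the proposed audit: flag private-section entries
# whose domain no longer resolves an SOA record. Assumes dnspython >= 2.0.
import dns.resolver

def has_soa(domain: str) -> bool:
    """Return True if an SOA record can be resolved for `domain`."""
    try:
        dns.resolver.resolve(domain, "SOA", lifetime=5)
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
            dns.resolver.NoNameservers, dns.resolver.Timeout):
        return False

# Placeholder list standing in for entries parsed from the private section.
private_entries = ["example-registry.example"]
removal_candidates = [d for d in private_entries if not has_soa(d)]
print(removal_candidates)  # these would go into a proposed-removal PR
```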
Thanks for the dialog here. This pull request is a "large meal," as it cross-affects others, and these layer over each other like acetates.

I personally think that a missing A/CNAME as a sensor is likely to have false positives and should be avoided. Namespaces that subdomain a given name may opt not to actually resolve the domain itself, and I have seen some domains configured IPv6-only, if you can imagine it. Example: kung.foo.tld or tube.bar.tld might be legit subdomains of foo.tld and bar.tld respectively, but there may be no A/CNAME RRs for foo.tld or bar.tld, and that's perfectly all right as long as the _psl leaf is present as a TXT record.

The SOA, on the other hand, is going to be present for every existing domain name. A missing SOA would be indicative of a non-existent domain name. Or so I think... I was trying to come up with a legit reason an SOA would not be present on a legitimate namespace, and the only situation I could come up with is resolver intervention, as some of the public resolvers deliberately alter responses as a feature. The way to keep automation from misbehaving here is to do diverse SOA lookups per domain against multiple public resolvers (1.1.1.1, 8.8.8.8, 9.9.9.9). This would mitigate response manipulation and also make the automation more robust against latency or connectivity issues specific to where it operates from.

I only have volunteer time to land my helicopter every once in a while and try to advance some PR reviews, which automation sounds delightful for, but it needs some elegance in order to not actually stack the reviewer with MORE cycles.
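A sketch of that "diverse lookup" idea, again assuming dnspython: query SOA against each of the public resolvers named above, and only treat a domain as absent when every resolver agrees, so a single resolver's response manipulation or transient failure can't trigger a removal.

```python
# Consensus SOA probe across multiple public resolvers (dnspython >= 2.0).
# A domain is considered missing only if NO resolver returns an SOA.
import dns.resolver

PUBLIC_RESOLVERS = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]

def soa_missing_everywhere(domain: str) -> bool:
    """True only if none of the queried resolvers returns an SOA."""
    for ip in PUBLIC_RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        try:
            resolver.resolve(domain, "SOA", lifetime=5)
            return False  # at least one resolver sees the zone
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
                dns.resolver.NoNameservers, dns.resolver.Timeout):
            continue  # this resolver failed or filtered; try the next
    return True
```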
I've investigated checking for SOA records, but it looks like a number of prominent domains return some form of error from multiple independent public resolvers. For example:
What about:
A TXT record?
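For reference, such a probe would target the `_psl` leaf mentioned earlier rather than the apex itself. A minimal sketch, assuming dnspython and the `_psl.<domain>` TXT convention described in the comment above:

```python
# Query the _psl TXT leaf for a domain (dnspython >= 2.0). Returns the
# published TXT strings, or an empty list if nothing is there.
import dns.resolver

def psl_txt_records(domain: str) -> list:
    """Return any TXT records at _psl.<domain>, or [] if absent."""
    try:
        answer = dns.resolver.resolve(f"_psl.{domain}", "TXT", lifetime=5)
        return [rdata.to_text() for rdata in answer]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
            dns.resolver.NoNameservers, dns.resolver.Timeout):
        return []
```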
This "removal if no txt record" concept has been thoroughly discussed in a number of iterations. In a world of one or zero, it is amazing as an idea. While that approach is binary and clean for automation purposes, we have no means to communicate with the admins of 15 years worth of existing entries. The reality is that there is a lot of organic legacy stuff with no TXT records and we need to not disrupt those through removal because they didn't know they needed to do something. So, IF this were done, it would have to be a "from x date forward", which would require a more robust means of tracking. So, we have a legacy entry challenge with automation of removals because there are 15 years worth of entries that would need to be evergreened with no means to evergreen, and zero or negative resources at this time to do any of this. An aside, HSTS is often mistakenly advanced as some utopian evolutionary model to aim for, but it is not a good solution for PSL. Due to the narrow scoping of the HSTS, it works ok for what it is made for, but it is not without gaps and pitfalls. What is different about the PSL is the myriad of diverse use cases that exist. Different segments of those use-cases present narrow solutions from time to time that address the 30-50% of the use cases they or their employer need solved, such as DBOUND or browser-need-only stuff. For those same legacy disruption issues, some form of holistic evolution is the wisest path forward. |
Recently, a number of voxel.sh subdomain registries have been clogging the PR system. The noted PRs and issues are tied together; they need to be applied in chronological order, which will cause a number of the subsequent PRs to require rebasing before they can proceed or be resolved.
Due to volunteer resource constraints, there are delays in processing pull requests for private-section entries and updates. In this case, a subdomain registry that had spun up an entrepreneurial model to test out introduced a number of subdomains, then later ghosted, either selling off or not renewing a number of the subdomain apex domains within the short span of time it took to process the PSL requests.
So, not only did this burn disposable labor cycles for the PSL and those downstream, it also compounded into a mass of requests.
META: This will have an impact on processing considerations. Perhaps a new acceptance criterion for a pull request for foo.bar: the requested namespace must demonstrate 2-3 years of functional operation of the subdomain space, plus a certain threshold of distinct, non-spam entries appearing in site:foo.bar Google search results. This is a new conversation, but a necessary one. A reasonable amount of friction would filter out 'throw at wall' mercenary experimentation namespaces that abandon their customers, may introduce security issues, and, most notably, leave debris and cleanup that consume PSL volunteer cycles better spent on more beneficial things.