Requirements for DID Method Standardization #10
A few that come to mind based on discussions with various community members over the last several months:
From Steven Capell via the CCG mailing list:
I think this is a great list @msporny. The WG's Operating Addendum also contains a few possible (high-level) criteria for the selection process (in section 4.4 "Evaluation Criteria for DID Methods"), which I am copying here:
Others, please also feel free to share your thoughts on selection criteria!
From Adrian Gropper via the CCG mailing list:
@msporny, @peacekeeper this is a great requirements list. Given the trademark issues that have come up lately, I propose that we add a step that will address trademarks as a quick ad hoc check during registration.

Reading the other issues / comments, I reflected on the fact that this is an international body and that copyrights, trademarks, & patents are national issues that don't apply internationally. For example, someone could get a copyright in the US, which would apply only within the US and nowhere else. Similarly, a European copyright would be enforceable within Europe, but not enforceable within Asia (unless nations have additional treaties).

This leads to the recent dilemma -- should this WG approve a DID Method request in light of potential copyright, trademark, or patent issues? I don't think this is the right place for such a debate, because those rights / limitations are only valid within the country of issuance. For example, if a patent holder receives a US patent, but not an EU patent, then the invention is legally implementable in Europe without patent restrictions. That may cause hard feelings, but that is how the laws are written.

So, what should we do? The ICANN Uniform Domain Name Dispute Resolution Policy has some very interesting requirements. Two items stood out to me:
What that means is that ICANN deferred the initial due diligence back to the submitter and instructed any parties claiming infringement to have their claim adjudicated by their governing legal authority. This keeps ICANN out of the debate while processing and complying with official legal rulings. Does that process sound reasonable for our purposes?
Thanks, Manu. I started a separate issue #12 to focus on the biometrics.
#10 (comment) is a W3C issue - not a DIF WG issue based on discussions from earlier this week. This topic is being tracked here: w3c/did-extensions#597
@manu why are you subsuming other people's contributions into your list? What do you think your role is wrt this WG? Wrt our WG processes, we're still at the STRAWMAN stage in terms of process and organization proposals, discussion, and decision making. Feels a bit rogue.
It seems worth putting the substantial time and energy invested in the DID Rubric (see DID Method Rubric v1.0, latest Editor's Draft) to work in this process. If there are considerations/criteria above which are not yet part of the Rubric, it seems they should be added.

It seems likely that the ranked-choice poll described above might be satisfactorily run then, targeting the Rubric's criteria for ranking. Theoretically, a long-ish list of candidate DID Methods could then be assessed, based on the Rubric, which results could then lead to a short list of candidate DID Methods based on how they fit with the ranked-choice poll above...

There's nothing easy about this. Much of it is more subjective than not, so applying the Rubric to the candidate DID Methods might require setting up (something like) a spreadsheet into which multiple people could put their Rubric assessments, which could then be averaged to arrive at our "working" (i.e., not carved in stone, not meant to be inherited for use elsewhere, meant only for purposes of this standardization effort) Rubric assessment.... Just some thoughts.
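The "spreadsheet of averaged assessments" idea above can be sketched in a few lines. This is a hypothetical illustration only: the criterion names and the 1-5 score scale are assumptions for the sake of the example, not taken from the actual DID Method Rubric.

```python
# Hypothetical sketch: average per-criterion Rubric scores across evaluators.
# Criterion names and the 1-5 scale are assumptions, not from the Rubric itself.
from statistics import mean

# Each evaluator submits a dict of criterion -> score (1 = weak, 5 = strong).
assessments = [
    {"decentralization": 4, "privacy": 3, "adoption": 5},
    {"decentralization": 5, "privacy": 2, "adoption": 4},
    {"decentralization": 3, "privacy": 4, "adoption": 4},
]

def average_scores(assessments):
    """Average each criterion's scores across all evaluators."""
    criteria = assessments[0].keys()
    return {c: round(mean(a[c] for a in assessments), 2) for c in criteria}

print(average_scores(assessments))
# e.g. {'decentralization': 4.0, 'privacy': 3.0, 'adoption': 4.33}
```

A shared spreadsheet would do the same arithmetic; the point is simply that per-criterion averages give a single "working" assessment per candidate DID Method without erasing individual evaluators' inputs.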
This is a great discussion! A few additional points I think we should consider to ensure robust and inclusive DID Methods: Governance: Clear frameworks for updates, dispute resolution, and decision-making are essential for trust and longevity. I strongly believe that if we address these points, it will make DIDs more impactful globally.
@manu At this time, it's presumptuous to assume (let alone suggest or advertise) that there will even be an upcoming ranked choice poll. Please don't do this.
Input from Bryan Newbold of BlueSky on requirements that would be useful from a BlueSky perspective:
In order to determine which DID Methods to focus on incubating in 2025, we'll need some sort of selection criteria that has the broadest consensus within the community. This issue is being raised to do some data collection around what that selection criteria should be; that is -- what requirements are important to you when selecting a DID Method that is to become a global standard?
Once we have a list of requirements, we can do a ranked choice poll on all the criteria to see what the community feels is most important to least important. We might have to separate the criteria by DID Method type (ephemeral, web-based, decentralized) because each might have slightly different requirements.
So, what requirements are important to include in this upcoming ranked choice poll among the various communities involved in this work?
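For concreteness, the poll mechanism proposed above could work along these lines. This is a minimal sketch of one common ranked-choice method (instant-runoff); the ballots below are made-up examples of criteria rankings, not real poll data, and the WG has not settled on any particular tallying rule.

```python
# Minimal instant-runoff sketch for a ranked-choice poll over selection
# criteria. Ballots here are hypothetical examples, not actual poll data.
from collections import Counter

def instant_runoff(ballots):
    """Return the winner of a ranked-choice (instant-runoff) poll.

    Each ballot lists options from most to least preferred. The option
    with the fewest first-choice votes is eliminated each round until
    one option holds a majority of the remaining ballots.
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tally = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        # Eliminate the candidate with the fewest first-choice votes
        # (ties broken arbitrarily in this sketch).
        candidates.discard(min(tally, key=tally.get))

ballots = [
    ["decentralization", "privacy", "scalability"],
    ["privacy", "decentralization", "scalability"],
    ["scalability", "privacy", "decentralization"],
    ["privacy", "scalability", "decentralization"],
    ["decentralization", "scalability", "privacy"],
]
print(instant_runoff(ballots))  # -> "privacy"
```

In practice a full ranking of all criteria (rather than a single winner) would be wanted, and the suggestion above to run separate polls per DID Method type (ephemeral, web-based, decentralized) would mean one such tally per category.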