Revising Data Model Release Pipeline #303

Open
ashleythedeveloper opened this issue Feb 13, 2025 · 3 comments

@ashleythedeveloper
Collaborator

Hello all,

After several conversations with Michael and Steve, we believe it’s time to revisit our process for releasing updates to the UNTP data models. Our goal is to consolidate the release process in this repository, offering a single source of truth where the reasoning behind changes is documented and collaboration on future updates is encouraged.

Current Process:

  • Data Modeling: Done in Jargon.
  • Snapshot on Save: A snapshot is captured automatically each time the model is saved.
  • Release: A release is initiated in Jargon when the model is ready.
  • Automation: A GitHub pipeline listens for a release in Jargon and publishes the artifacts to https://test.uncefact.org/vocabulary/untp (a rough sketch of the publish step follows below).
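
For illustration only, here is a minimal sketch of what the publish step of such a pipeline might do once a Jargon release triggers it. The artifact URL, file name, and output directory are hypothetical placeholders, not the pipeline's actual configuration:

```python
#!/usr/bin/env python3
"""Rough sketch of a publish step: fetch a released artifact and copy it
into the directory served at the vocabulary URL. All names are placeholders."""
import json
import pathlib
import urllib.request

# Hypothetical download URL for the artifact produced by a Jargon release.
ARTIFACT_URL = "https://example.org/jargon/untp/latest/untp.jsonld"
# Hypothetical directory that backs https://test.uncefact.org/vocabulary/untp.
OUT_DIR = pathlib.Path("vocabulary/untp")


def publish() -> None:
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(ARTIFACT_URL) as resp:
        payload = resp.read()
    # Basic sanity check: refuse to publish an artifact that isn't valid JSON.
    json.loads(payload)
    (OUT_DIR / "untp.jsonld").write_bytes(payload)


if __name__ == "__main__":
    publish()
```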

I'm starting this conversation to discuss what the best workflow might be moving forward. I look forward to hearing your thoughts and any suggestions you might have.

ashleythedeveloper self-assigned this Feb 13, 2025
@Fak3
Contributor

Fak3 commented Feb 13, 2025

There should be two more final steps:

  • make the context file human readable
  • calculate the context file's hash and publish it along with the link to the file itself (see the sketch below)
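
A minimal sketch of those two steps, assuming the context file is a JSON-LD document already on disk; the path and output layout are illustrative only:

```python
#!/usr/bin/env python3
"""Sketch of the two suggested steps: pretty-print the context file and
publish a SHA-256 digest alongside it. Paths are hypothetical."""
import hashlib
import json
import pathlib

CONTEXT_PATH = pathlib.Path("vocabulary/untp/untp.jsonld")  # hypothetical path


def make_human_readable(path: pathlib.Path) -> None:
    """Re-serialise the context with indentation so it is easy to read and diff."""
    data = json.loads(path.read_text(encoding="utf-8"))
    path.write_text(json.dumps(data, indent=2, sort_keys=True) + "\n", encoding="utf-8")


def publish_hash(path: pathlib.Path) -> str:
    """Hash the exact published bytes so consumers can verify what they download."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    # Write the digest next to the context file so the hash can be published
    # alongside the link to the file itself.
    (path.parent / (path.name + ".sha256")).write_text(f"{digest}  {path.name}\n", encoding="utf-8")
    return digest


if __name__ == "__main__":
    make_human_readable(CONTEXT_PATH)  # formatting first, since it changes the bytes
    print(publish_hash(CONTEXT_PATH))
```

Note the ordering: any formatting step changes the bytes, so the hash has to be calculated afterwards, otherwise the published digest won't match the file people actually download.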

@ashleythedeveloper
Collaborator Author

I want to clarify that we're not planning to replace Jargon. Instead, we're proposing that the collaboration process and final approval for releases happen in this repository.

@absoludity
Contributor

One tricky part of this is going to be where the source data for the model lives. If it's in Jargon, as it is today, then we'll need to do some work to ensure we always export the data cleanly (the whole model, attributes, etc.) to a GitHub repo, on save or something similar (I'd think). Or, if the source of the data is a git repo, we may need to be able to import it into Jargon whenever it changes.
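
Purely as an illustration of the first option (mirroring a Jargon export into a git repo on save or on a schedule), something like the following could keep the repo copy in sync. The export endpoint and target path are made up for the sketch; Jargon's real export mechanism may look quite different:

```python
#!/usr/bin/env python3
"""Illustrative only: mirror a (hypothetical) Jargon export into the repo
and commit it when it changes. Endpoint and paths are placeholders."""
import pathlib
import subprocess
import urllib.request

EXPORT_URL = "https://example.org/jargon/untp/export.json"  # placeholder endpoint
TARGET = pathlib.Path("model/untp-export.json")             # placeholder repo path


def sync() -> None:
    with urllib.request.urlopen(EXPORT_URL) as resp:
        latest = resp.read()
    if TARGET.exists() and TARGET.read_bytes() == latest:
        return  # nothing changed since the last export
    TARGET.parent.mkdir(parents=True, exist_ok=True)
    TARGET.write_bytes(latest)
    # Record the change so the model history lives in version control.
    subprocess.run(["git", "add", str(TARGET)], check=True)
    subprocess.run(["git", "commit", "-m", "Sync model export from Jargon"], check=True)


if __name__ == "__main__":
    sync()
```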

Of course, investigating the best options here will be part of this ticket, but one thing probably needs to be held in focus: even if we need a tight, repo-based, version-controlled process for the release of the UNTP schemas, JSON-LD contexts, etc., users of this data are perhaps in a different position. They may be less concerned about defining their extension in their own repo (at least initially) and may be happier with a simpler, Jargon-only process for creating extensions. We need to ensure that stays as easy as possible for them, while ensuring that our own process is safely version controlled. If we can do both (i.e. define and use a process ourselves which keeps the ease of use of Jargon while still being version controlled, and which people can easily migrate to when they need to), even better.
