feat: add audit table for data migration scripts #4095
Conversation
🤖 Hasura Change Summary compared a subset of table metadata including permissions: Tracked Tables (1)
"flow_id" uuid NOT NULL, | ||
"team_id" integer NOT NULL, |
Would relationships here be helpful for queries in the migration scripts?
I don't think so! As I see it, we'll simply return the `flow_id` from this table (it's the primary key, so indexed), then pass it into a GraphQL relational query which will fetch the flow and its latest published version (so the meaningful foreign key is at the GraphQL and existing `flows` table level rather than on this one?). But please object if I'm missing a benefit of fkeys directly here!
Oof, slow brain today - answered my own question there - you're totally right: if we set a proper relationship to `flows` here, then the migration script will be able to make a single query rather than two 👍 update incoming!
✅
```graphql
query MyQuery {
  temp_data_migrations_audit(where: {updated: {_eq: false}}, limit: 1) {
    flow_id
    flows {
      data
      published_flows(order_by: {created_at: desc}, limit: 1) {
        data
      }
    }
  }
}
```
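Once a script has migrated a flow, it would presumably flip the `updated` flag so the query above stops returning that record. A minimal sketch of that step in SQL (the table and column names come from the query above; the `flow_id` literal is a placeholder, and in practice this might go through Hasura's generated update mutation instead):

```sql
-- Sketch: mark one flow as migrated so it drops out of the updated=false queue.
-- The flow_id literal is a placeholder, not a real row.
UPDATE "public"."temp_data_migrations_audit"
SET "updated" = true
WHERE "flow_id" = '00000000-0000-0000-0000-000000000000';
```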
Following on from the data migration chat! This is a basic table structure that will help us both queue up which flows should be updated by a given data migration script and track which have been updated so far. Exclusively the `platformAdmin` role can access and update this table.

I'm imagining:
- Migration scripts query `where updated = false` until all records have been updated
- We then `TRUNCATE` the table and re-use the same structure for our next use-case (plus the option to export as CSV like a receipt and store on gdrive before truncation if necessary; a rough sketch follows)
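A rough sketch of that export-then-reset step, assuming psql as the client; the table name comes from the query above and the CSV file name is a placeholder:

```sql
-- Sketch of the "CSV receipt, then reset" step described above (psql assumed).
-- Export the audit rows before wiping the table:
\copy (SELECT * FROM temp_data_migrations_audit) TO 'migration_receipt.csv' CSV HEADER

-- Re-use the same structure for the next migration:
TRUNCATE temp_data_migrations_audit;
```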