chore: add Power Automate Webhook notification step to /upload-submission process #3047

Conversation
@@ -38,16 +46,35 @@ export async function sendToS3(
  const { fileUrl } = await uploadPrivateFile(
    exportData,
    `${sessionId}.json`,
-   "barnet-prototype",
I've dropped the static folder name here, which means we'll default to using our standard nanoid()-generated random folder name. I think this ultimately has a couple of benefits:
- A bit more secure/un-guessable
- The existing S3 sanitation job will "just work" for submission JSON files the same as it does for user-uploaded ones
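For illustration only, a minimal sketch of the key layout this implies, assuming (this is not the actual uploadPrivateFile implementation) that the helper substitutes a nanoid() folder when no static folder name is passed:

```typescript
// Hypothetical sketch of the key layout described above, not Planx's actual
// uploadPrivateFile implementation. Dropping the static folder name means each
// submission file gets a random, un-guessable nanoid() prefix instead.
import { nanoid } from "nanoid";

function buildPrivateFileKey(fileName: string, folder?: string): string {
  // e.g. "barnet-prototype/abc123.json" before,
  //      "V1StGXR8_Z5jdHi6B-myT/abc123.json" after
  return `${folder ?? nanoid()}/${fileName}`;
}
```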
Just a heads up here (I know the PR is still a draft, sorry!) - I think the file will currently be "orphaned" in S3, which we don't want. The S3 sanitation operation loops over expired sessions and then gets all associated file URLs from the session data by scanning the passport, and this file won't be listed there.
One (hopefully fairly simple) solution which still allows us to use a nanoid would be to keep the generated URL in the db somewhere - presumably the submission audit table for this send destination - and then we can match it to the sessionId currently being sanitised and delete it.
There are a few options here other than the above - happy to talk through if easier 👍
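As a rough illustration of that idea - the table, helper, and env var names below are hypothetical, not the existing Planx schema - the sanitation job could look the stored URL up by sessionId and delete the object:

```typescript
// Hypothetical sketch only: assumes a new audit table that records the
// generated file URL per sessionId at submission time. Names are illustrative.
import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3";

// Stand-in for a query against the assumed audit table, e.g.
//   SELECT file_url FROM submission_audit WHERE session_id = $1
async function getAuditedFileUrlForSession(
  _sessionId: string,
): Promise<string | undefined> {
  return undefined; // placeholder - real implementation would query the db
}

const s3 = new S3Client({ region: "eu-west-2" });

export async function deleteSubmissionFileForExpiredSession(
  sessionId: string,
): Promise<void> {
  const fileUrl = await getAuditedFileUrlForSession(sessionId);
  if (!fileUrl) return;

  // Derive the object key from the stored URL, e.g.
  // https://<bucket>.s3.eu-west-2.amazonaws.com/<nanoid>/<sessionId>.json
  const key = decodeURIComponent(new URL(fileUrl).pathname.slice(1));

  await s3.send(
    new DeleteObjectCommand({
      Bucket: process.env.AWS_S3_BUCKET, // assumed env var
      Key: key,
    }),
  );
}
```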
Thanks for clarifying this! An audit table would indeed be an easy solution for this case, but understanding this highlights another loophole for me: files uploaded on any environment from /draft or /preview testing links are also currently being orphaned/omitted from sanitation, because they won't have an associated lowcal_session record, right?
I think we'll need to make a sanitation change sooner rather than later that looks at the S3 upload date rather than our own session data to fix all edge cases? Being careful to omit Planx service image uploads, but those are already in a separate bucket!
Yep, I think a "catch-all" sweep based on age is a good solution (I think it's actually possible to make files "expire" by date when uploading them - see the sketch after this comment).
Currently both public and private files are in the same bucket, which makes this harder (there are also some legacy public files in another bucket).
A good solution may be -
- Public files in their own bucket, separated by flow - easy cleanup when a service is deleted, and easy to identify the files associated with a flow
- Private files in their own bucket, grouped by session (even if there's no lowcal_session record, there's a session generated in state) - these could then all expire based on submission date + retention period, or after a set time interval
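One possible way to get the "expire after a set interval" behaviour is an S3 bucket lifecycle rule, sketched below with the AWS SDK v3. The bucket name and 28-day retention period are assumptions, and in practice this could equally be configured once via infrastructure code or the console rather than at runtime:

```typescript
// Sketch of an S3 lifecycle rule that auto-expires objects after a set number
// of days. Bucket name and retention period are assumptions, not Planx config.
import {
  S3Client,
  PutBucketLifecycleConfigurationCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "eu-west-2" });

await s3.send(
  new PutBucketLifecycleConfigurationCommand({
    Bucket: "planx-private-user-data", // assumed dedicated private bucket
    LifecycleConfiguration: {
      Rules: [
        {
          ID: "expire-private-session-files",
          Status: "Enabled",
          Filter: { Prefix: "" }, // apply to every object in the bucket
          Expiration: { Days: 28 }, // assumed retention period
        },
      ],
    },
  }),
);
```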
Sounds like a plan - I'll add a Trello ticket for this to new requests so we pick it up next sprint / this phase 👍
Will also plan to create an audit table for this work in a small PR later this week & add that to the sanitation schedule in the short term.
Removed vultr server and associated DNS entries
…ss/upload-submission-add-webhook
Looks great!
!["barnet", "lambeth"].includes(teamSlug) | ||
) { | ||
alert( | ||
"AWS S3 uploads are currently being prototyped with Barnet only. Please do not select this option for other councils yet.", | ||
"AWS S3 uploads are currently being prototyped with Barnet and Lambeth only. Please do not select this option for other councils yet.", |
👍
Changes:
- TeamIntegrations now reads & decrypts both the Power Automate URL and API Key fields

Testing:
- … user-data- buckets, so this needs to merge to main to do a true "end to end test" of the whole Planx submission → Power Automate flow (a sketch of the notification step follows below)

Future PRs (so we can unblock initial testing on main in the meantime!):
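For context, a hedged sketch of the kind of notification step this PR adds: POST the generated file URL to a team's Power Automate webhook, authenticated with the decrypted per-team API key. The payload shape, header name, and function name are assumptions rather than the merged implementation:

```typescript
// Illustrative sketch only: notify a team's Power Automate flow that a
// submission file has been uploaded. Field and header names are assumptions.
interface PowerAutomatePayload {
  sessionId: string;
  fileUrl: string;
}

export async function notifyPowerAutomate(
  webhookUrl: string,
  apiKey: string, // decrypted per-team key, e.g. read from TeamIntegrations
  payload: PowerAutomatePayload,
): Promise<number> {
  const response = await fetch(webhookUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      apiKey, // assumed header name for the API key
    },
    body: JSON.stringify(payload),
  });

  if (!response.ok) {
    throw new Error(
      `Power Automate webhook failed with status ${response.status}`,
    );
  }

  return response.status;
}
```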