
chore: add Power Automate Webhook notification step to /upload-submission process #3047

Merged · 4 commits merged into main from jess/upload-submission-add-webhook on May 1, 2024

Conversation

@jessicamcinchak (Member) commented Apr 23, 2024

Changes:

  • Bumps planx-core throughout to capture type changes to TeamIntegrations - it now reads & decrypts both the Power Automate URL and API Key fields
  • Sets up the API endpoint which generates the schema payload, uploads it to S3, and now additionally sends an authorised notification to a Power Automate webhook (a rough sketch of this step follows below)
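
For context, a minimal sketch of what that webhook notification step could look like, assuming Node 18's global fetch. The function name, header name, and payload shape are illustrative assumptions, not the actual PlanX implementation:

```typescript
// Illustrative sketch only - names and payload shape are assumptions.
interface PowerAutomateIntegration {
  powerAutomateWebhookURL: string; // decrypted from TeamIntegrations
  powerAutomateAPIKey: string; // decrypted from TeamIntegrations
}

interface SubmissionWebhookPayload {
  sessionId: string;
  fileUrl: string; // S3 location of the uploaded submission JSON
}

// Send an authorised notification to the team's Power Automate webhook
// once the submission JSON has been uploaded to S3.
async function notifySubmissionWebhook(
  { powerAutomateWebhookURL, powerAutomateAPIKey }: PowerAutomateIntegration,
  payload: SubmissionWebhookPayload,
): Promise<void> {
  const response = await fetch(powerAutomateWebhookURL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Header name is an assumption - the key itself is whatever is stored
      // (encrypted) against the team's integration record.
      apiKey: powerAutomateAPIKey,
    },
    body: JSON.stringify(payload),
  });

  if (!response.ok) {
    throw new Error(
      `Failed to notify Power Automate webhook (status ${response.status})`,
    );
  }
}
```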

Testing:

  • From this pizza, we can test that the send event is successfully created & executed - but these tests will fail on Kev's side because the user-uploaded files can't be downloaded (e.g. pizzas upload to a different bucket)
  • Kev's files API keys only work against the staging & production user-data- buckets, so this needs to merge to main to do a true "end-to-end test" of the whole Planx submission → Power Automate flow

Future PRs (so we can unblock initial testing on main in the meantime!):

  • Audit table (not necessary for very early testing yet)
  • Finalise sanitation strategy (see comment below)

@@ -38,16 +46,35 @@ export async function sendToS3(
const { fileUrl } = await uploadPrivateFile(
exportData,
`${sessionId}.json`,
"barnet-prototype",
@jessicamcinchak (Member Author) commented on this diff:

I've dropped the static folder name here, which means we'll default to using our standard nanoid()-generated random folder name (sketched below). I think this ultimately has a couple of benefits:

  • A bit more secure/unguessable
  • Existing S3 sanitation job will "just work" for submission JSON files the same as user-uploaded ones
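
A rough sketch of the default-folder behaviour being relied on here; the helper name and key shape are assumptions rather than the real uploadPrivateFile internals:

```typescript
import { nanoid } from "nanoid";

// Rough sketch only - the real uploadPrivateFile helper lives in the PlanX
// API; the key shape here is an assumption.
function buildPrivateFileKey(filename: string, folder?: string): string {
  // With no static folder supplied, fall back to a random, hard-to-guess
  // nanoid() folder name, matching how user-uploaded files are keyed.
  const prefix = folder ?? nanoid();
  return `${prefix}/${filename}`;
}

// e.g. buildPrivateFileKey(`${sessionId}.json`) => "V1StGXR8_Z5jdHi6B-myT/<sessionId>.json"
```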

A Contributor replied:

Just a heads up here (I know the PR is still a draft, sorry!) - I think the file will currently be "orphaned" in S3, which we don't want. The S3 sanitation operation loops over expired sessions and then gets all associated file URLs from the session data by scanning the passport, and this file won't be listed there.

One (hopefully fairly simple) solution which still allows us to use a nanoid would be to keep the generated URL in the db somewhere - presumably the submission audit table for this send destination - and then we can match this to the sessionId currently being sanitised and delete it (a rough sketch of this follows below).

A few options here other than the above - happy to talk through if easier 👍
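
A minimal sketch of how that extra sanitation step could look, assuming an audit-table lookup exists; the function names, bucket handling, and region are placeholders rather than existing PlanX code:

```typescript
import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3";

// Region is a placeholder assumption for illustration only.
const s3 = new S3Client({ region: "eu-west-2" });

// Sketch of the proposed step: when a session expires, look up any submission
// file keys recorded against it (e.g. in a future submission audit table) and
// delete them, so nanoid-keyed uploads aren't orphaned.
async function deleteSubmissionFilesForSession(
  sessionId: string,
  bucket: string,
  // Hypothetical lookup against the proposed audit table.
  getSubmissionFileKeys: (sessionId: string) => Promise<string[]>,
): Promise<void> {
  const keys = await getSubmissionFileKeys(sessionId);
  await Promise.all(
    keys.map((key) =>
      s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: key })),
    ),
  );
}
```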

@jessicamcinchak (Member Author) replied:

Thanks for clarifying this! An audit table would indeed be an easy solution to this case, but understanding this highlights another loophole for me: files uploaded on any environment from /draft or /preview testing links are also currently being orphaned/omitted from sanitation, because they won't have an associated lowcal_session record, right?

I think we'll need to make a sanitation change sooner rather than later that looks at the S3 upload date rather than our own session data, to fix all the edge cases? We'd need to be careful to omit Planx service image uploads, but those are already in a separate bucket!

The Contributor replied:

Yep, I think a "catch-all" sweep based on age is a good solution (I think it's actually possible to make files "expire" by date when uploading them).

Currently both public and private files are in the same bucket, which makes this harder (there are some legacy public files in another bucket).

A good solution may be:

  • public files, in their own bucket, separated by flow
    • easy cleanup when a service is deleted, easy to identify files associated with a flow
  • private files, in their own bucket, grouped by session (even if there's no lowcal_session record, there's a session generated in state)
    • These could then all expire based on submission date + retention period, or after a set time interval (see the lifecycle-rule sketch below this list).
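
One way S3 supports this kind of age-based sweep is a bucket lifecycle rule (objects can also be tagged at upload and expired by a tag-filtered rule). A minimal sketch using the AWS SDK v3, where the bucket name, region, and retention period are placeholder assumptions rather than agreed values:

```typescript
import {
  S3Client,
  PutBucketLifecycleConfigurationCommand,
} from "@aws-sdk/client-s3";

// Region is a placeholder assumption for illustration only.
const s3 = new S3Client({ region: "eu-west-2" });

// Expire every object in the private files bucket a fixed number of days
// after it was created, independent of whether a lowcal_session record exists.
async function configurePrivateFileRetention(
  bucket: string,
  retentionDays = 28, // placeholder retention period
): Promise<void> {
  await s3.send(
    new PutBucketLifecycleConfigurationCommand({
      Bucket: bucket,
      LifecycleConfiguration: {
        Rules: [
          {
            ID: "expire-private-files",
            Status: "Enabled",
            Filter: { Prefix: "" }, // apply to the whole bucket
            Expiration: { Days: retentionDays },
          },
        ],
      },
    }),
  );
}
```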

@jessicamcinchak (Member Author) commented May 1, 2024:

Sounds like a plan - I'll add a Trello ticket for this to new requests so we pick it up next sprint / this phase 👍

Will also plan to create an audit table for this work later this week in a small PR & add that to the sanitation schedule in the short term.

github-actions bot commented Apr 23, 2024:

Removed vultr server and associated DNS entries

@jessicamcinchak marked this pull request as ready for review April 30, 2024 11:24
@jessicamcinchak requested a review from a team April 30, 2024 11:24
@DafyddLlyr (Contributor) left a review:

Looks great!


Comment on lines +113 to +116
!["barnet", "lambeth"].includes(teamSlug)
) {
alert(
"AWS S3 uploads are currently being prototyped with Barnet only. Please do not select this option for other councils yet.",
"AWS S3 uploads are currently being prototyped with Barnet and Lambeth only. Please do not select this option for other councils yet.",
The Contributor replied:

👍

@jessicamcinchak merged commit 891e788 into main on May 1, 2024 (12 checks passed)
@jessicamcinchak deleted the jess/upload-submission-add-webhook branch May 1, 2024 09:35