
chore: add Power Automate Webhook notification step to /upload-submission process #3047

Merged · 4 commits · May 1, 2024
33 changes: 31 additions & 2 deletions api.planx.uk/modules/send/s3/index.test.ts
@@ -1,6 +1,7 @@
import supertest from "supertest";
import app from "../../../server";
import { expectedPlanningPermissionPayload } from "../../../tests/mocks/digitalPlanningDataMocks";
import { queryMock } from "../../../tests/graphqlQueryMock";

jest.mock("../../saveAndReturn/service/utils", () => ({
markSessionAsSubmitted: jest.fn(),
@@ -25,6 +26,34 @@ jest.mock("@opensystemslab/planx-core", () => {
});

describe(`uploading an application to S3`, () => {
beforeEach(() => {
queryMock.mockQuery({
name: "GetStagingIntegrations",
data: {
teams: [
{
integrations: {
powerAutomateWebhookURL: "test.azure.com/whatever",
},
},
],
},
variables: {
slug: "barnet",
},
});

queryMock.mockQuery({
name: "GetStagingIntegrations",
data: {
teams: [],
},
variables: {
slug: "unsupported-team",
},
});
});

it("requires auth", async () => {
await supertest(app)
.post("/upload-submission/barnet")
@@ -48,10 +77,10 @@ describe(`uploading an application to S3`, () => {
.post("/upload-submission/unsupported-team")
.set({ Authorization: process.env.HASURA_PLANX_API_KEY! })
.send({ payload: { sessionId: "123" } })
-      .expect(400)
+      .expect(500)
      .then((res) => {
        expect(res.body.error).toMatch(
-          "Send to S3 is not enabled for this local authority (unsupported-team)",
+          /No team matching "unsupported-team" found/,
);
});
});
35 changes: 31 additions & 4 deletions api.planx.uk/modules/send/s3/index.ts
@@ -2,6 +2,7 @@ import type { NextFunction, Request, Response } from "express";
import { $api } from "../../../client";
import { uploadPrivateFile } from "../../file/service/uploadFile";
import { markSessionAsSubmitted } from "../../saveAndReturn/service/utils";
import axios from "axios";

export async function sendToS3(
req: Request,
@@ -11,6 +12,8 @@ export async function sendToS3(
// `/upload-submission/:localAuthority` is only called via Hasura's scheduled event webhook, so body is wrapped in a "payload" key
const { payload } = req.body;
const localAuthority = req.params.localAuthority;
const env =
process.env.APP_ENVIRONMENT === "production" ? "production" : "staging";

if (!payload?.sessionId) {
return next({
@@ -22,9 +25,14 @@
try {
const { sessionId } = payload;

-    // Only prototyping with Barnet to begin
-    // In future, confirm this local authority has an S3 bucket/folder configured in team_integrations or similar
-    if (localAuthority !== "barnet") {
+    // Fetch integration credentials for this team
+    const { powerAutomateWebhookURL } = await $api.team.getIntegrations({
+      slug: localAuthority,
+      encryptionKey: process.env.ENCRYPTION_KEY!,
+      env,
+    });
+
+    if (!powerAutomateWebhookURL) {
return next({
status: 400,
message: `Send to S3 is not enabled for this local authority (${localAuthority})`,
@@ -38,16 +46,35 @@
const { fileUrl } = await uploadPrivateFile(
exportData,
`${sessionId}.json`,
"barnet-prototype",
Member Author:

I've dropped the static folder name here, which means we'll default to using our standard nanoid()-generated random folder name. I think this ultimately has a couple of benefits:

  • A bit more secure/un-guessable
  • Existing S3 sanitation job will "just work" for submission JSON files the same as user-uploaded ones
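
For illustration, a rough sketch of the default-folder behaviour described above, assuming uploadPrivateFile falls back to a nanoid() key when no folder argument is passed (the actual logic lives in the file upload service; names below are examples only):

import { nanoid } from "nanoid";

// Hypothetical sketch: with no explicit folder, each upload lands under a
// random, unguessable folder name, e.g. "V1StGXR8_Z5jdHi6B-myT"
const sessionId = "123"; // example value
const folder = nanoid();
const key = `${folder}/${sessionId}.json`;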

Contributor:

Just a heads up here (I know the PR is still draft, sorry!) - I think currently the file will be "orphaned" in S3, which we don't want. The S3 sanitation operation loops over expired sessions, then gets all associated file URLs from the session data by scanning the passport - and this file won't be listed there.

One (hopefully fairly simple) solution which still allows us to use a nanoid would be to keep the generated URL in the db somewhere - presumably the submission audit table for this send destination - and then we can match it to the sessionId currently being sanitised and delete it.

A few options here other than the above - happy to talk through if easier 👍
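
A minimal sketch of that lookup-and-delete idea, using aws-sdk v2 (already a dependency of this API); the audit-table lookup and the file URL shape are hypothetical:

import { S3 } from "aws-sdk";

const s3 = new S3();

// Hypothetical sanitation step: given the file URL stored in an audit table
// for an expired session, derive the S3 object key and delete the object.
async function deleteSubmissionFile(fileUrl: string, bucket: string) {
  const key = new URL(fileUrl).pathname.replace(/^\//, ""); // strip leading "/"
  await s3.deleteObject({ Bucket: bucket, Key: key }).promise();
}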

Member Author:

Thanks for clarifying this! An audit table would indeed be an easy solution to this case, but understanding it highlights another loophole for me: files uploaded on any environment from /draft or /preview testing links are currently also being orphaned/omitted from sanitation, because they won't have an associated lowcal_session record, right?

I think we'll need to make a sanitation change sooner rather than later that looks at the S3 upload date rather than our own session records, to fix all edge cases - being careful to omit Planx service image uploads, though those are already in a separate bucket!

Contributor:

Yep, I think a "catch-all" sweep based on age is a good solution (I think it's actually possible to make files "expire" by date when uploading them).

Currently both public and private files are in the same bucket, which makes this harder (there are some legacy public files in another bucket).

A good solution may be:

  • Public files in their own bucket, separated by flow
    • Easy cleanup when a service is deleted; easy to identify files associated with a flow
  • Private files in their own bucket, grouped by session (even if there's no lowcal_session record, there's a session generated in state)
    • These could then all expire based on submission date + retention period, or after a set time interval.
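
A minimal sketch of the "expire by age" option under discussion, again with aws-sdk v2 - S3 evaluates lifecycle rules server-side, so no sweep job is needed once the rule is in place. The bucket name and retention period below are hypothetical:

import { S3 } from "aws-sdk";

const s3 = new S3();

// Hypothetical: expire everything in a dedicated private-uploads bucket a set
// number of days after creation.
async function addExpiryRule() {
  await s3
    .putBucketLifecycleConfiguration({
      Bucket: "planx-private-uploads", // hypothetical bucket name
      LifecycleConfiguration: {
        Rules: [
          {
            ID: "expire-private-session-files",
            Status: "Enabled",
            Filter: { Prefix: "" }, // apply to the whole bucket
            Expiration: { Days: 28 }, // hypothetical retention period
          },
        ],
      },
    })
    .promise();
}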

Member Author (@jessicamcinchak, May 1, 2024):

Sounds like a plan - I'll add a Trello ticket for this to new requests so we pick it up next sprint / this phase 👍

Will also plan to create an audit table for this work later this week in a small PR & add that to the sanitation schedule in the short term.

);

// Send a notification with the file URL to the Power Automate webhook
const webhookResponse = await axios({
method: "POST",
url: powerAutomateWebhookURL,
adapter: "http",
headers: {
"Content-Type": "application/json",
},
data: {
message: "New submission from PlanX",
environment: env,
file: fileUrl,
},
}).catch((error) => {
throw new Error(
`Failed to send submission notification to ${localAuthority}'s Power Automate Webhook (${sessionId}): ${error}`,
);
});

// Mark session as submitted so that reminder and expiry emails are not triggered
markSessionAsSubmitted(sessionId);

// TODO Create and update an audit table entry

return res.status(200).send({
message: `Successfully uploaded submission to S3: ${fileUrl}`,
webhookResponse: webhookResponse,
});
} catch (error) {
return next({
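
For reference, the notification body posted to the Power Automate webhook by the code above has the following shape; the file URL shown is illustrative only, as the real value is whatever uploadPrivateFile returns:

type SubmissionNotification = {
  message: "New submission from PlanX";
  environment: "production" | "staging";
  file: string; // private file URL returned by uploadPrivateFile
};

const example: SubmissionNotification = {
  message: "New submission from PlanX",
  environment: "staging",
  file: "https://example.com/file/private/V1StGXR8_Z5jdHi6B-myT/123.json", // hypothetical
};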
2 changes: 1 addition & 1 deletion api.planx.uk/package.json
@@ -4,7 +4,7 @@
"private": true,
"dependencies": {
"@airbrake/node": "^2.1.8",
"@opensystemslab/planx-core": "git+https://github.com/theopensystemslab/planx-core#4e67b2e",
"@opensystemslab/planx-core": "git+https://github.com/theopensystemslab/planx-core#2173d00",
"@types/isomorphic-fetch": "^0.0.36",
"adm-zip": "^0.5.10",
"aws-sdk": "^2.1467.0",