CI cloud test #131
base: main

Conversation
chore: cloud now generated pre-signed-urls
chore: fix signature
chore: remove env version
chore: remove console.log
chore: now we have cloud-backend tests
chore: the ng redirecting backend
chore: everything except subscript
- Bump [better-sqlite3](https://github.com/WiseLibs/better-sqlite3) from 11.5.0 to 11.6.0 (dependabot, semver-minor)
- Bump [@fireproof/core](https://github.com/fireproof-storage/fireproof) from 0.19.113 to 0.19.114 (dependabot, semver-patch)
- Bump [@web3-storage/w3up-client](https://github.com/storacha/w3up/tree/HEAD/packages/w3up-client) from 16.5.1 to 16.5.2 (dependabot, semver-patch)
- Bump [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.4.11 to 6.0.2 (dependabot, semver-major)
- Bump [vitest](https://github.com/vitest-dev/vitest/tree/HEAD/packages/vitest) from 2.1.6 to 2.1.8 (dependabot, semver-patch)
- Bump [better-sqlite3](https://github.com/WiseLibs/better-sqlite3) from 11.6.0 to 11.7.0 (dependabot, semver-minor)
- Bump [@typescript-eslint/typescript-estree](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/typescript-estree) from 8.16.0 to 8.17.0 (dependabot, semver-minor)
- Bump [@adviser/cement](https://github.com/mabels/cement) from 0.2.40 to 0.2.41 (dependabot, semver-patch)
- Bump [typescript-eslint](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/typescript-eslint) from 8.16.0 to 8.17.0 (dependabot, semver-minor)
- Bump [@aws-sdk/client-s3](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/clients/client-s3) from 3.703.0 to 3.705.0 (dependabot, semver-minor)
- Bump [cborg](https://github.com/rvagg/cborg) from 4.2.6 to 4.2.7 (dependabot, semver-patch)
- Bump [prettier](https://github.com/prettier/prettier) from 3.4.1 to 3.4.2 (dependabot, semver-patch)
- Bump [vite-tsconfig-paths](https://github.com/aleclarson/vite-tsconfig-paths) from 5.1.3 to 5.1.4 (dependabot, semver-patch)
- Bump [wrangler](https://github.com/cloudflare/workers-sdk/tree/HEAD/packages/wrangler) from 3.91.0 to 3.93.0 (dependabot, semver-minor)
- Bump [@web3-storage/w3up-client](https://github.com/storacha/w3up/tree/HEAD/packages/w3up-client) from 16.5.2 to 17.1.1 (dependabot, semver-major)
- Bump [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 6.0.3 to 6.0.5 (dependabot, semver-patch)
- Bump [esbuild](https://github.com/evanw/esbuild) from 0.24.0 to 0.24.2 (dependabot, semver-patch)
- Bump [@aws-sdk/client-s3](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/clients/client-s3) from 3.712.0 to 3.717.0 (dependabot, semver-minor)
- Bump [@adviser/cement](https://github.com/mabels/cement) from 0.2.41 to 0.2.44 (dependabot, semver-patch)
- Bump [@web3-storage/capabilities](https://github.com/storacha/w3up/tree/HEAD/packages/capabilities) from 18.0.0 to 18.0.1 (dependabot, semver-patch)
- Bump [node-sqlite3-wasm](https://github.com/tndrle/node-sqlite3-wasm) from 0.8.29 to 0.8.30 (dependabot, semver-patch)
- Bump [typescript-eslint](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/typescript-eslint) from 8.18.0 to 8.18.1 (dependabot, semver-patch)
chore: cloud now generated pre-signed-urls
chore: fix signature
chore: remove env version
chore: remove console.log
chore: now we have cloud-backend tests
chore: the ng redirecting backend
chore: everything except subscript
chore: MsgConnection
chore: Msger with connection tests
chore: hono is our server platform
chore: Get/Put/Delete.Data/WAL
chore: refactor cloud test helper
chore: added data/wal store tests
chore: remove dotenv
Actionable comments posted: 7
🔭 Outside diff range comments (1)

README.md (1)

Line range hint `1-25`: **Add Docker setup instructions for local testing.**

According to the PR objectives, the testing framework is transitioning to use local Docker containers. Please add documentation about:

- How to set up and use Docker containers with `test/start-cloud.sh` and `test/start-s3.sh`
- A note about using Colima for macOS users
- Instructions for running cloud tests with `pnpm run test --project cloud`

Example structure to add:

    ## Local Docker Setup

    Before running tests, start the required Docker containers:

    ```bash
    # Start cloud testing container
    bash test/start-cloud.sh

    # Start S3 testing container
    bash test/start-s3.sh
    ```

    ## macOS Users

    If you're on macOS, you'll need to use Colima or a similar tool to run Docker containers.

    ## Running Cloud Tests

    To run the cloud-specific tests:

    ```bash
    pnpm run test --project cloud
    ```
🧰 Tools: 🪛 Markdownlint (0.37.0), 24-24: Dollar signs used before commands without showing output (MD014, commands-show-output)

🧹 Nitpick comments (28)

tests/docker-compose.ts (1)

`3-9`: **Ensure robust fallback handling.**

Your fallback from `docker-compose` to `docker compose` is cleanly implemented. However, note that if neither `docker-compose` nor `docker compose` is available, the script may fail silently. Consider printing a descriptive error message when both options are unavailable.

```diff
 const out = await $`which docker-compose`.catch(() => ({ exitCode: 1 }));
 const dockerComposePrg = out.exitCode === 0 ? ["docker-compose"] : ["docker", "compose"];
+if (out.exitCode !== 0 && !process.env.DOCKER_INSTALLED) {
+  console.error("Neither docker-compose nor docker compose is available. Please install Docker.");
+  process.exit(1);
+}
```
tests/docker-compose.yaml (3)

`18-29`: **Security consideration: default MinIO credentials.**

Storing default credentials (`minioadmin`) is typical for local development, but ensure they're changed for staging or production use. You might also want to parameterize them through environment variables for broader flexibility.

`41-54`: **Consider removing commented-out code.**

These lines appear to be a reference for manual steps. If not strictly needed for documentation, removing them can reduce clutter and maintain clarity in version control.

```diff
-#docker run -d -p 9000:9000 --name minio \
-#  -e "MINIO_ACCESS_KEY=minioadmin" \
-#  -e "MINIO_SECRET_KEY=minioadmin" \
-#  -v /tmp/data:/data \
-#  -v /tmp/config:/root/.minio \
-#  minio/minio server /data
-#...
+# (Optional) If you need manual steps for reference, move them to separate documentation.
```

`66-85`: **Health check approach is thorough.**

Using a dedicated "wait-for-ready" service with a health check ensures all dependencies are online. This pattern is ideal for CI setups, but consider whether 30 retries at 5-second intervals are sufficient for large or slower environments.
examples/ucan/workshop/team/package.json (1)

Line range hint `5-6`: **Address test data recognition issue.**

The PR objectives mention that tests aren't recognizing data writes to the storage backend. While the dependency changes look correct, this might indicate an issue with the Docker container configuration or network setup.

Consider the following potential solutions:

- Verify Docker container networking between the test runner and storage containers
- Check if the storage backend initialization is complete before tests begin
- Ensure proper wait conditions are implemented in `test/start-cloud.sh` and `test/start-s3.sh`

Would you like me to help create a diagnostic script to verify the Docker container setup and connectivity?

Also applies to: 10-11
src/cloud/meta-merger/tenant-ledger.ts (1)

`13-26`: **Consider on-delete options.**

The schema's foreign key constraint references the `Tenant` table but does not specify `ON DELETE CASCADE` or other constraints. Confirm whether dependent ledger rows should be removed when a tenant is deleted.

src/cloud/meta-merger/meta-merger.ts (4)
`32-39`: **Utility function clarity.**

The `toByConnection` function converts a `Connection` into a `ByConnection` object. Consider adding a brief comment clarifying the need for this transformation so future maintainers understand its utility.

`64-69`: **Schema creation performance.**

The loop inside `createSchema()` runs each prepared statement sequentially. That's typically fine for setup, but if your schema grows, batching these statements (or wrapping them in a transaction) can significantly reduce overhead.

`70-99`: **Ensure consistent cleanup.**

In `addMeta`, the calls to `deleteByConnection` remove existing entries before the ledger is ensured. If intermediate failures occur, some stale data or partial inserts might remain. Wrapping these steps in a transaction could provide better atomicity.
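The transactional wrapping suggested here can be sketched generically. This is a hypothetical illustration only: the `Db`/`Stmt` interfaces below mirror the minimal prepare/run shape used in this PR, and `addMetaAtomically`, the SQL strings, and the parameter layout are placeholders, not the actual meta-merger API.

```typescript
// Hypothetical sketch: run the delete + inserts of addMeta inside a single
// transaction so an intermediate failure rolls everything back instead of
// leaving stale rows or partial inserts behind.
interface Stmt {
  run(...args: unknown[]): void;
}
interface Db {
  prepare(sql: string): Stmt;
  exec(sql: string): void; // used here for BEGIN / COMMIT / ROLLBACK
}

function addMetaAtomically(db: Db, deleteSql: string, insertSql: string, rows: unknown[][]): void {
  db.exec("BEGIN");
  try {
    db.prepare(deleteSql).run();
    const insert = db.prepare(insertSql);
    for (const row of rows) {
      insert.run(...row); // batched inserts also amortize per-statement overhead
    }
    db.exec("COMMIT");
  } catch (err) {
    db.exec("ROLLBACK"); // undo the delete and any partial inserts
    throw err;
  }
}
```

With better-sqlite3 specifically, the built-in `db.transaction(fn)` helper achieves the same effect with less ceremony; the explicit BEGIN/COMMIT form above is only meant to show the shape of the fix.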
`101-113`: **Return type clarity.**

`metaToSend` returns an array of `CRDTEntry` objects. Consider whether you need additional metadata (e.g., time sent) for debugging or analytics. This is mostly a design choice but can help with tracing.

src/cloud/meta-merger/meta-send.ts (1)

`70-98`: **Chunked insert.**

`insert` processes each entry in a loop. If you anticipate large batches, consider a bulk insert technique within a transaction to reduce overhead.

src/cloud/meta-merger/meta-by-tenant-ledger.ts (3)
`37-55`: **Uniqueness constraints.**

`PRIMARY KEY (tenant, ledger, reqId, resId, metaCID)` plus `UNIQUE(metaCID)` can be redundant or have side effects if a single `metaCID` can appear under multiple requests or responses. Confirm this matches your data model.

`81-98`: **Selective deletion.**

`deleteByConnection` excludes entries only if `metaCID` is not in the JSON list. This approach is flexible but can become costly if `metaCIDs` is large. Indexing strategies might be needed if performance lags.

`146-157`: **Returning typed results.**

`selectByConnection` returns typed data after parsing JSON. Ensure you handle potential JSON parse failures or data corruption, especially if external data sources might insert malformed JSON.

src/cloud/meta-merger/sql-border.ts (1)
`16-18`: **Interface scoping.**

`SqlLite` provides a `prepare(sql: string)` method returning `SqlLiteStmt`. If future expansions require transaction handling or concurrency checks, you may want to extend the interface.

src/cloud/meta-merger/tenant.ts (2)
`4-7`: **Consider storing creation date as a numeric timestamp.**

While storing dates as ISO strings in the database is functional, using numeric timestamps (e.g., UNIX epoch) often simplifies date manipulation, comparisons, and indexing.

`33-41`: **Validate tenant concurrency.**

Though this "insert if not exists" pattern helps ensure a single record per tenant, consider additional locking or a unique constraint if concurrent inserts at scale are expected.

src/cloud/meta-merger/meta-merger.test.ts (2)
`14-19`: **Consider using an in-memory database for tests.**

Using an on-disk database (`"./dist/test.db"`) may leave behind state between runs. Unless you specifically need persistent data for debugging, an in-memory database (e.g., `":memory:"`) can speed up tests and ensure a clean slate on every test run.

`46-66`: **Improve test naming for clarity.**

The test name `"insert one multiple"` is a bit unclear. Consider renaming it to something like `"insert single vs multiple and verify metaToSend"`.

```diff
- it("insert one multiple", async () => {
+ it("insert single vs multiple and verify metaToSend", async () => {
```

examples/ucan/control-panel/src/state.js (2)
`32-96`: **Consider removing or reducing `console.log` usage in production code.**

While `console.log(msg.type);` is useful for debugging, it might introduce noise in production logs. Consider using a conditional debug log or a proper logging mechanism with log levels.
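A minimal gate along these lines would keep the existing call sites while silencing them outside of development. This is a hypothetical sketch: `makeDebugLog`, the `[debug]` prefix, and the flag wiring in the comment are all illustrative, not part of the codebase.

```typescript
// Hypothetical sketch: a tiny logger factory. Messages only reach the sink
// when `enabled` is true (e.g. driven by an environment flag or build
// setting), so production logs stay quiet without touching the call sites.
function makeDebugLog(enabled: boolean, sink: (...args: unknown[]) => void = console.log) {
  return (...args: unknown[]): void => {
    if (enabled) sink("[debug]", ...args);
  };
}

// Example wiring (flag name is illustrative):
// const debug = makeDebugLog(process.env.DEBUG === "1");
// debug(msg.type);
```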
`258-306`: **Security caution: storing user configs in localStorage.**

While `saveConfig` is convenient for preserving user settings, storing potentially sensitive info (like emails, delegation data, or tokens) in localStorage may pose security risks in some environments.

src/ucan/index.ts (1)
`477-527`: **Validate clock audience carefully.**

In `share`, there's a check ensuring `clock.delegation.audience.did() === args.from.did()`. This is good for correctness, but it might create confusion if the user tries to share with an email that doesn't match the clock's DID. Provide a more descriptive error message if needed.

examples/ucan/control-panel/src/view.js (2)
`209-210`: **Use optional chaining to avoid potential runtime errors.**

At lines 209-210, accessing `input.value` directly could lead to an error if `input` is null.

```diff
- const val = input && input.value.trim();
+ const val = input?.value?.trim();
```

🧰 Tools: 🪛 Biome (1.9.4), [error] 209-210, 210-210: Change to an optional chain (lint/complexity/useOptionalChain).

`434-435`: **Use optional chaining in the share event as well.**

The same pattern, `const val = input?.value?.trim();`, can prevent an error if `input` is null.

🧰 Tools: 🪛 Biome (1.9.4), [error] 434-435: Change to an optional chain (lint/complexity/useOptionalChain).

src/ucan/ucan-gateway.ts (4)
`25-25`: **Document the new property.**

Consider adding a brief doc comment describing `polling` and its purpose, so maintainers understand how and why polling is enabled.

`53-53`: **Simplify the boolean expression.**

Static analysis flagged this ternary as unnecessary. Directly use the comparison result:

```diff
-const polling = baseUrl.getParam("poll") === "t" ? true : false;
+const polling = baseUrl.getParam("poll") === "t";
```

🧰 Tools: 🪛 Biome (1.9.4), [error] 53-53: Unnecessary use of boolean literals in conditional expression (lint/complexity/noUselessTernary).

`289-334`: **Enhance polling subscription logic.**

A 10-second poll may be acceptable for basic scenarios, but consider:

- Making the polling interval configurable for flexibility.
- Handling repeated failures from `this.get(url)` to break out or delay further attempts.
- Tracking active subscriptions to avoid possible memory leaks when multiple subscriptions accumulate but are never unsubscribed.
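These suggestions can be sketched as follows. This is a hedged sketch under stated assumptions: `subscribeWithPolling`, `fetchOnce`, and the option names are hypothetical stand-ins for the gateway's real `this.get(url)` call and subscription plumbing, not its actual API.

```typescript
// Hypothetical sketch of the suggested polling improvements: a configurable
// interval, a cap on consecutive failures, and an unsubscribe handle so each
// subscription can be tracked and cleaned up.
type Unsubscribe = () => void;

function subscribeWithPolling(
  fetchOnce: () => Promise<unknown>,
  onData: (data: unknown) => void,
  opts: { intervalMs?: number; maxFailures?: number } = {}
): Unsubscribe {
  const intervalMs = opts.intervalMs ?? 10_000; // default mirrors the 10-second poll
  const maxFailures = opts.maxFailures ?? 5;
  let failures = 0;

  const timer = setInterval(async () => {
    try {
      onData(await fetchOnce());
      failures = 0; // reset the streak on success
    } catch {
      if (++failures >= maxFailures) stop(); // give up after repeated failures
    }
  }, intervalMs);

  function stop(): void {
    clearInterval(timer);
  }
  return stop;
}
```

Returning the `Unsubscribe` handle lets the caller keep a registry of active subscriptions and tear them all down on shutdown, which addresses the memory-leak concern.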
`341-342`: **Use optional chaining.**

You can simplify the null checks by using `if (this.inst?.email) { ... }`. This helps avoid referencing `this.inst` multiple times.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (4)

- `examples/ucan/control-panel/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
- `examples/ucan/workshop/participants/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
- `examples/ucan/workshop/team/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (40)

- `README.md` (1 hunks)
- `examples/ucan/browser/src/state.js` (0 hunks)
- `examples/ucan/browser/src/types.d.ts` (0 hunks)
- `examples/ucan/browser/src/view.js` (0 hunks)
- `examples/ucan/control-panel/package.json` (1 hunks)
- `examples/ucan/control-panel/src/index.html` (0 hunks)
- `examples/ucan/control-panel/src/state.js` (1 hunks)
- `examples/ucan/control-panel/src/types.d.ts` (1 hunks)
- `examples/ucan/control-panel/src/view.js` (1 hunks)
- `examples/ucan/node/package.json` (1 hunks)
- `examples/ucan/workshop/participants/.gitignore` (0 hunks)
- `examples/ucan/workshop/participants/db.js` (0 hunks)
- `examples/ucan/workshop/participants/delegate.js` (2 hunks)
- `examples/ucan/workshop/participants/package.json` (1 hunks)
- `examples/ucan/workshop/participants/start-here.js` (1 hunks)
- `examples/ucan/workshop/team/README.md` (1 hunks)
- `examples/ucan/workshop/team/package.json` (1 hunks)
- `examples/ucan/workshop/team/src/state.js` (1 hunks)
- `package.json` (5 hunks)
- `src/cloud/backend/wrangler.toml` (1 hunks)
- `src/cloud/client/index.ts` (3 hunks)
- `src/cloud/meta-merger/create-schema-cli.ts` (1 hunks)
- `src/cloud/meta-merger/meta-by-tenant-ledger.ts` (1 hunks)
- `src/cloud/meta-merger/meta-merger.test.ts` (1 hunks)
- `src/cloud/meta-merger/meta-merger.ts` (1 hunks)
- `src/cloud/meta-merger/meta-send.ts` (1 hunks)
- `src/cloud/meta-merger/sql-border.ts` (1 hunks)
- `src/cloud/meta-merger/tenant-ledger.ts` (1 hunks)
- `src/cloud/meta-merger/tenant.ts` (1 hunks)
- `src/ucan/clock/capabilities.ts` (0 hunks)
- `src/ucan/common.ts` (2 hunks)
- `src/ucan/index.ts` (12 hunks)
- `src/ucan/types.ts` (1 hunks)
- `src/ucan/ucan-gateway.ts` (5 hunks)
- `tests/docker-compose.ts` (1 hunks)
- `tests/docker-compose.yaml` (1 hunks)
- `tests/start-netlify.sh` (0 hunks)
- `tests/start-partykit.sh` (0 hunks)
- `tests/start-s3.sh` (0 hunks)
- `tests/start-ucan.sh` (0 hunks)
💤 Files with no reviewable changes (11)
- examples/ucan/workshop/participants/.gitignore
- examples/ucan/control-panel/src/index.html
- tests/start-netlify.sh
- tests/start-s3.sh
- tests/start-ucan.sh
- src/ucan/clock/capabilities.ts
- tests/start-partykit.sh
- examples/ucan/browser/src/types.d.ts
- examples/ucan/workshop/participants/db.js
- examples/ucan/browser/src/view.js
- examples/ucan/browser/src/state.js
✅ Files skipped from review due to trivial changes (3)
- examples/ucan/node/package.json
- examples/ucan/workshop/team/README.md
- src/cloud/meta-merger/create-schema-cli.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/cloud/client/index.ts
- package.json
🧰 Additional context used
🪛 Biome (1.9.4)
examples/ucan/control-panel/src/view.js
[error] 209-210: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
[error] 210-210: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
[error] 434-435: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
src/ucan/ucan-gateway.ts
[error] 53-53: Unnecessary use of boolean literals in conditional expression.
Simplify your code by directly assigning the result without using a ternary operator.
If your goal is negation, you may use the logical NOT (!) or double NOT (!!) operator for clearer and concise code.
Check for more details about NOT operator.
Unsafe fix: Remove the conditional expression with
(lint/complexity/noUselessTernary)
[error] 340-340: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🪛 Gitleaks (8.21.2)
examples/ucan/workshop/participants/start-here.js
20-20: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🔇 Additional comments (47)
README.md (1)
`33-35`: **Clarify the getMeta and stream behavior.**

The documentation about getMeta and stream operations is too terse. Please provide:

- Why the subscribe method should be avoided
- What `resGetMeta` is and its purpose
- The implications of the `deliveryCount` updates
examples/ucan/control-panel/src/types.d.ts (3)
`1-3`: **Imports look consistent.**

The imported types are well-organized and relevant for the rest of this file.

`7-20`: **Clearly defined application State interface.**

The `State` type appears comprehensive. Ensure that any optional properties (e.g., `serverInput`) are checked in the reducers or UI logic before usage to avoid null reference errors.

`24-42`: **Comprehensive message types.**

Defining a union of message types is an excellent approach to enforce type safety. Be sure that all message dispatch scenarios are tested to confirm no unhandled message types cause runtime errors.

examples/ucan/workshop/participants/delegate.js (1)
`23-23`: **Confirm that the updated delegation string is valid.**

The new base64-encoded delegation string could fail if incorrectly copied or truncated. Consider validating or logging errors if the string fails to decode properly.

src/cloud/backend/wrangler.toml (3)
: Configuration basics appear valid.
Thename
,main
, andcompatibility_date
fields are properly specified for this Wrangler configuration.
7-10
: Durable objects and migrations look good.
The definitions forFP_META_GROUPS
and the migration entry are well-structured. Ensure that theFPMetaGroups
class is defined and exported correctly for production.Also applies to: 12-15, 18-20, 22-24
26-35
: Revisit hardcoded credentials in environment variables.
Storing secrets likeACCESS_KEY_ID
andSECRET_ACCESS_KEY
in plaintext poses security risks. Use environment variables or a secure vault outside version control.Also applies to: 36-39, 41-50, 51-54
tests/docker-compose.ts (2)
1-2
: Imports look good.Importing the
zx
library is straightforward and organized. No issues found.
11-15
: Error handling is appropriate.Logging the error and exiting with a non-zero code is aligned with typical CLI scripts. This ensures that calling CI or scripts can detect failures.
tests/docker-compose.yaml (4)
1-9
: Service definition for Netlify is well-structured.The context and dockerfile references are clear, and the port binding approach is straightforward for local development.
10-17
: Partykit container specification is consistent.Similar to Netlify, your approach is clean and easily extensible for future use cases.
30-40
: Create Bucket step is well-defined.The AWS CLI container usage is clear, and the
depends_on
ensures MinIO is running first. This is a solid local development pattern.
55-65
: UCAN service definition is succinct.The image build is straightforward, and the port mapping is clear. It should integrate smoothly with the other services.
examples/ucan/control-panel/package.json (1)
10-11
: Verify removal of Cloudflare connection handlingThe removal of
@fireproof/connect
and addition of local dependencies aligns with moving to Docker-based testing. However, we should verify that the Docker setup properly replaces the Cloudflare functionality.Let's check for any remaining Cloudflare dependencies or references:
✅ Verification successful
Remaining Cloudflare dependencies need to be addressed
The verification reveals several remaining Cloudflare-related dependencies and references that need attention:
- Test configuration files still import Cloudflare worker pools:
vitest.cloud.config.ts
vitest.cf-kv.config.ts
- Cloud backend code contains Cloudflare-specific implementations:
src/cloud/backend/cf-hono-server.ts
src/cloud/backend/server.ts
- UCAN examples still use
@fireproof/connect
:
examples/ucan/node/delegate.js
examples/ucan/node/script.js
Let's verify if these are actively used or just remnants:
Let's check one more aspect to understand if these files are actually used in the current build:
Cloudflare dependencies can be safely removed
Based on the verification results:
- The Cloudflare-related files in
src/cloud/backend/
are not actively referenced in any build configurations or entry points- The Vitest Cloudflare configs (
vitest.cloud.config.ts
andvitest.cf-kv.config.ts
) appear to be unused legacy files- The UCAN examples using
@fireproof/connect
are not referenced anywhere in the active codebaseThe removal of
@fireproof/connect
frompackage.json
is correct, and the remaining Cloudflare-related files appear to be unused artifacts that could be cleaned up in a future PR. The Docker-based testing setup (as seen instart-cloud.sh
) has properly replaced the Cloudflare functionality.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for Cloudflare-related imports or configurations
rg -i "cloudflare|@fireproof/connect" --type js --type ts
# Check if test scripts reference Cloudflare
fd "start-cloud.sh|start-s3.sh" -x cat {} \;
Length of output: 1909
Script:
#!/bin/bash
# Check if Cloudflare test configs are referenced in package.json scripts
rg -l "vitest\.(cloud|cf-kv)\.config\.ts" package.json
# Check imports of cloud backend files
rg -l "cf-hono-server|cloudflare:workers" --type js --type ts
# Check if the UCAN examples are referenced in any test or documentation
rg -l "examples/ucan/node/(delegate|script)\.js"
Length of output: 199
Script:
#!/bin/bash
# Check if the Cloudflare-related files are referenced in any build configurations or main entry points
rg -l "src/cloud/backend/(server|cf-hono-server)" package.json tsconfig*.json vite*.config.* webpack*.config.*
# Check if the vitest configs are actually used
fd "vitest\.(cloud|cf-kv)\.config\.ts" -x test -f {} \;
# Look for any active imports of cloud backend
ast-grep --pattern 'import $_ from "$_/cloud/backend/$_"'
Length of output: 603
examples/ucan/workshop/team/package.json (1)
10-11: Verify UCAN functionality in Docker environment. The package maintains UCAN-related dependencies (`@ucanto/core`) while transitioning to local testing. This suggests the Docker setup needs to handle UCAN authentication/authorization. Let's verify that the Docker setup includes the necessary UCAN configurations:
✅ Verification successful
Docker setup correctly handles UCAN functionality
The Docker environment is properly configured to support UCAN functionality:
- A dedicated UCAN service is defined in `docker-compose.yaml` using `Dockerfile.fp-ucan`
- The UCAN service exposes port 8787 and is included in the health checks
- The Dockerfile correctly sets up the UCAN environment by:
  - Cloning the fireproof-ucan repository
  - Installing dependencies
  - Running the development server
- The setup is integrated with other services (Netlify, PartyKit, MinIO) in a coordinated way
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check Docker-related files for UCAN configuration
fd "Dockerfile|docker-compose|\.sh$" -x grep -l "ucan" {} \;
# Check for UCAN-related environment variables in Docker setup
fd "start-cloud.sh|start-s3.sh" -x grep -i "ucan" {} \;
Length of output: 216
Script:
#!/bin/bash
# Check contents of identified Docker-related files
echo "=== Contents of tests/Dockerfile.fp-ucan ==="
cat tests/Dockerfile.fp-ucan
echo -e "\n=== Contents of tests/docker-compose.yaml ==="
cat tests/docker-compose.yaml
echo -e "\n=== Contents of smoke/ucan/it.sh ==="
cat smoke/ucan/it.sh
echo -e "\n=== Contents of tests/localrun.sh ==="
cat tests/localrun.sh
# Also check for any UCAN-related environment files
echo -e "\n=== Searching for UCAN environment files ==="
fd "\.env" -x grep -l "ucan" {} \;
Length of output: 3867
src/cloud/meta-merger/tenant-ledger.ts (2)
6-10: Interface type consistency check. The `createdAt` field is a `Date`, while the database schema stores it as a `TEXT` column. This is acceptable if it is always converted to an ISO string; just ensure all consumers know it's a string in storage.
42-51: INSERT ... SELECT pattern. Using an INSERT with a `WHERE NOT EXISTS` clause ensures you only insert a record if it doesn't already exist. This is a good strategy, but be aware that the primary key also enforces uniqueness. Double-check whether the `NOT EXISTS` logic duplicates the PK constraint or is necessary to avoid throwing an error in certain flows.
src/cloud/meta-merger/meta-merger.ts (1)
19-23: Interfaces are well-structured. Defining separate interfaces for `MetaMerge` and `ByConnection` clarifies the shapes of metadata objects versus connection contexts. This separation aids readability and robust usage.
src/cloud/meta-merger/meta-send.ts (4)
7-12: Date storage. The `MetaSendRow` interface stores `sendAt` as a `Date`, saved in the DB as a TEXT column. Keep the date string format consistent (e.g., ISO) so that sorting behavior remains predictable if needed.
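A quick sketch of why the ISO format matters here: ISO-8601 timestamps are fixed-width and sort lexicographically in chronological order, so SQLite's default TEXT ordering stays meaningful. A minimal illustration (the values are made up):

```javascript
// ISO-8601 timestamps (UTC, fixed-width) sort lexicographically in
// chronological order, which is why storing Date values as ISO TEXT
// keeps a plain string sort predictable.
const stamps = [
  new Date("2024-03-01T12:00:00Z"),
  new Date("2023-11-30T23:59:59Z"),
  new Date("2024-03-01T11:59:59Z"),
].map((d) => d.toISOString());

const lexical = [...stamps].sort(); // plain string sort
const chronological = [...stamps].sort(
  (a, b) => new Date(a).getTime() - new Date(b).getTime()
);

console.log(JSON.stringify(lexical) === JSON.stringify(chronological)); // true
```

Mixed formats (locale strings, varying precision) would break this property, which is the consistency the comment asks for.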
18-32: Schema dependencies. You're extending `MetaByTenantLedgerSql.schema()` within `MetaSendSql.schema()`. Ensure that if `MetaByTenantLedger` creation fails or is partially created, you handle that gracefully before continuing to create `MetaSend`.
46-53: Single-statement preparation. `sqlInsertMetaSend` is prepared only once, which is efficient. Just confirm you're aware that reusing the same statement object sequentially is the recommended pattern here, and that concurrency won't be an issue if run in parallel.
55-68: Selecting unsent metadata. `sqlSelectToAddSend` ensures metadata is selected from `MetaByTenantLedger` only if it's not already in `MetaSend`. This logic is correct for deduplicating sends. Confirm that partial sends or resends are handled gracefully if `reqId` or `resId` are reused.
src/cloud/meta-merger/meta-by-tenant-ledger.ts (2)
7-15: Schema alignment. `MetaByTenantLedgerRow` references `updateAt` while the SQL column is named `updatedAt`. The code does handle that mismatch by converting to/from JSON; just be certain all references remain consistent across the codebase.
129-132: Error-handling strategy. In `ensure`, an insertion error (e.g., violating unique constraints) might break the flow. If partial success is critical, consider try/catch or logging to maintain resiliency.
src/cloud/meta-merger/sql-border.ts (2)
3-5: Date generation. The low-level `now()` function centralizes date creation as an ISO string. If you need timezones or custom formatting, keep a consistent approach across the codebase.
7-14: Asynchronous patterns. `SqlLiteStmt` depicts `run` and `get` as `Promise`-returning methods, but `better-sqlite3` typically performs synchronous calls. Ensure the bridging logic is tested thoroughly so that concurrency and error handling remain robust.
src/cloud/meta-merger/tenant.ts (1)
26-31: Check concurrency edge cases. Multiple simultaneous calls to `sqlCreateTenant()` should be safe, but ensure that you handle potential concurrency issues in your broader application flow if multiple threads or processes attempt to create tenants concurrently.
src/cloud/meta-merger/meta-merger.test.ts (3)
36-44: Test scenario validations look good. The "insert nothing" test accurately verifies that no metadata means no returned rows, ensuring proper base-case handling in `metaToSend()`.
68-85: Well-structured test for multiple inserts. This test thoroughly checks that subsequent calls to `metaToSend()` return the inserted CRDT entries in sorted order, ensuring stable behavior.
87-119: Robust multi-connection test coverage. This test ensures that multiple connections inserting and retrieving metadata see a consistent, merged view—great for verifying concurrency correctness in your code.
examples/ucan/control-panel/src/state.js (5)
1-23: Imports and typedef look good. The mix of JSDoc typedefs and ES modules is clearly structured, improving readability and type clarity.
102-135: Side-effect handling is well-organized. The `fx` function uses a clear switch statement to break down the side effects. Actions are easy to trace, and the approach follows a consistent pattern that aligns with the rest of the codebase.
137-185: Verify error handling in asynchronous effects. Asynchronous functions such as `claimAllShares` and `connect` throw errors without a top-level `try/catch`. Ensure there's a global error boundary or a strategy to catch and log or display these errors to the user.
187-254: Modular effect implementations are coherent. Functions like `determineAgent`, `determineClock`, `determineServer`, `fetchDbContents`, and `login` are properly separated, improving maintainability and readability.
311-351: Initialization logic is straightforward. Building the initial state from localStorage, then hooking into the store for reactivity, is a clean design pattern, making the application state transparent and predictable.
src/ucan/index.ts (2)
42-42: `poll` parameter addition. The newly introduced `poll` field in `ConnectionParams` broadens the connection's behavior. Make sure any existing code calling `connect` sets or defaults this field properly to avoid unexpected polling.
441-475: `claimShares` function is well-structured. It cleanly fetches all share delegations in one call. The aggregated approach likely reduces complexity and overhead compared to multiple single-share claims.
src/ucan/types.ts (1)
86-86: Potentially breaking change by returning `Unit` instead of `ClockAuthorizeShareSuccess`. Users expecting a structured success object will now receive a `Unit`. Verify whether any downstream logic depends on the old shape.
examples/ucan/workshop/participants/start-here.js (1)
20-20: Potential false positive from Gitleaks (Generic API key). The DID string is not a service credential; it looks like a standard DID rather than an API key. You can safely ignore this warning.
🧰 Tools
🪛 Gitleaks (8.21.2)
20-20: Detected a Generic API Key, potentially exposing access to various services and sensitive operations. (generic-api-key)
src/ucan/ucan-gateway.ts (1)
14-14: Import usage. The introduction of `agentProofs` and `extractDelegation` from `common.js` is valid, ensuring the shared utilities live in one location.
examples/ucan/workshop/team/src/state.js (1)
53-53: Validate polling behavior. Confirm that the local Docker environment and other setups can handle the newly enabled polling. If necessary, parametrize the polling option so it can be toggled for different environments or tests.
src/ucan/common.ts (5)
3-3: Type import. Bringing in definitions for `Agent`, `AgentDataExport`, and `DelegationMeta` fortifies type safety.
6-7: New import. Importing `to_arraybuf` allows uniform conversion to ArrayBuffer, reducing potential type mismatches.
8-9: Import service type. Importing the `Service` type unifies and clarifies the agent's service-contract references.
10-32: Robust proof extraction. The function meaningfully segregates delegations and attestations. As a safeguard, consider defensive checks for `capabilities[0]` in case an unexpected structure lacks capabilities.
62-62: Switch to ArrayBuffer. Using `to_arraybuf(b.bytes)` ensures a standardized serialization. Verify that downstream consumers handle `ArrayBuffer` rather than `Uint8Array`.
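The Uint8Array-to-ArrayBuffer distinction raised above is easy to get wrong when a view shares a larger backing buffer. A hedged sketch of what a converter like `to_arraybuf` has to account for (this is an illustration, not the library's actual implementation):

```javascript
// Minimal sketch of a Uint8Array -> ArrayBuffer conversion. A view's
// .buffer cannot be used directly when the view covers only part of a
// larger buffer, so the exact byte range is sliced out.
function toArrayBuffer(view) {
  return view.buffer.slice(view.byteOffset, view.byteOffset + view.byteLength);
}

const backing = new Uint8Array([1, 2, 3, 4, 5, 6]);
const window = backing.subarray(2, 5); // shares backing.buffer, offset 2
const ab = toArrayBuffer(window);

console.log(ab instanceof ArrayBuffer, ab.byteLength); // true 3
console.log(Array.from(new Uint8Array(ab))); // [ 3, 4, 5 ]
```

Consumers that previously indexed a `Uint8Array` directly must first wrap the `ArrayBuffer` in a view again, which is the downstream concern the comment flags.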
Cloud Meta Merge Datastructure:

1. PK(reqId,resId,tenant,ledger) accessed(date) (delete after x of time)
2. PK(tenant,ledger,reqId,resId) meta deliveryCount (delete if deiveryCount > y)
   if meta is updated deliveryCount = 0

getMeta updates deliveryCount
getMeta on stream starts updates stream of resGetMeta
avoid subscribe method
🛠️ Refactor suggestion
Improve formatting and clarity of Cloud Meta Merge documentation
The Cloud Meta Merge documentation needs better formatting and clarity:
- There's a typo: "deiveryCount" should be "deliveryCount"
- The structure is hard to read and understand
Consider this improved format:
## Cloud Meta Merge Data Structure
### Primary Key Formats
1. **Time-based Access Record**
PK(reqId, resId, tenant, ledger)
- Includes: accessed(date)
- Cleanup: Entries deleted after time period X
2. **Delivery Tracking Record**
PK(tenant, ledger, reqId, resId)
- Includes: meta deliveryCount
- Cleanup: Entries deleted if deliveryCount > Y
- Note: deliveryCount resets to 0 when meta is updated
### Operations
- **getMeta**:
- Updates the deliveryCount
- When used with a stream, initiates the update stream of resGetMeta
- ⚠️ Warning: Avoid using the subscribe method
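The delivery-tracking rules above can be sketched in miniature. Everything here is illustrative (an in-memory Map rather than the real tables, a made-up threshold for "Y"):

```javascript
// In-memory sketch of the delivery-tracking rules: getMeta bumps
// deliveryCount, a meta update resets it to 0, and cleanup deletes
// records whose deliveryCount exceeded the threshold Y.
const MAX_DELIVERIES = 3; // stands in for "Y"

class DeliveryTracker {
  constructor() {
    this.rows = new Map(); // key -> { meta, deliveryCount }
  }
  putMeta(key, meta) {
    const row = this.rows.get(key);
    if (!row || row.meta !== meta) {
      this.rows.set(key, { meta, deliveryCount: 0 }); // update resets count
    }
  }
  getMeta(key) {
    const row = this.rows.get(key);
    if (!row) return undefined;
    row.deliveryCount += 1; // getMeta updates deliveryCount
    return row.meta;
  }
  cleanup() {
    for (const [key, row] of this.rows) {
      if (row.deliveryCount > MAX_DELIVERIES) this.rows.delete(key);
    }
  }
}

const t = new DeliveryTracker();
t.putMeta("tenant/ledger/req/res", "meta-v1");
for (let i = 0; i < 4; i++) t.getMeta("tenant/ledger/req/res"); // count = 4
t.cleanup();
console.log(t.rows.size); // 0
```

The reset-on-update rule is what keeps frequently changing metadata alive while letting stale, repeatedly delivered entries age out.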
// await db.close();
// process.exit(0);
🛠️ Refactor suggestion
Consider proper resource cleanup.
Commenting out `db.close()` and `process.exit(0)` may lead to orphaned resources or processes if this script is run repeatedly. Validate whether a graceful shutdown is needed to avoid potential resource leaks.
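One way to guarantee the cleanup rather than commenting it out is a try/finally wrapper. A sketch with a stand-in database object (in the real script the open/close calls would be awaited, but the shape is the same):

```javascript
// Sketch: run work against a database and always close it afterwards.
// `openDb`/`db` are stand-ins for the script's real database handle.
function withDatabase(openDb, work) {
  const db = openDb();
  try {
    return work(db);
  } finally {
    db.close(); // runs on success AND when `work` throws
  }
}

// Demonstration with a fake database that records its lifecycle:
const events = [];
const fakeOpen = () => ({
  query: () => events.push("query"),
  close: () => events.push("close"),
});

withDatabase(fakeOpen, (db) => db.query());
console.log(events); // [ 'query', 'close' ]

try {
  withDatabase(fakeOpen, () => {
    throw new Error("boom");
  });
} catch (_err) {
  // close still ran before the error propagated
}
console.log(events); // [ 'query', 'close', 'close' ]
```

A signal handler (e.g. on SIGINT) calling the same close path would cover interrupted runs as well.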
"@fireproof/core": "0.19.116",
"@fireproof/ucan": "../../../../dist/ucan",
💡 Codebase verification
Version inconsistency detected in @fireproof/core dependencies
There's an inconsistency in the @fireproof/core versions across package.json files:
- One file uses `^0.19.119`
- Multiple files use `0.19.116`
Additionally, the `dist/ucan` directory referenced in multiple package.json files doesn't appear to exist in the repository, which could cause issues during local development and testing.
Recommendations:
- Align all @fireproof/core versions to the same version (either 0.19.116 or ^0.19.119)
- Ensure the dist/ucan directory is properly built before running local tests
- Consider a more robust local development setup (e.g., a yarn/npm workspace, or a relative path to source instead of dist)
🔗 Analysis chain
Verify dependency setup for local Docker testing
The changes align with the PR's objective to transition to local testing:
- Using the local path `../../../../dist/ucan` ensures the Docker containers use the local build
- The specific version of `@fireproof/core` (0.19.116) is consistent across all package files
Let's verify the dist/ucan directory setup:
The previous command didn't find the dist/ucan directory. Let's verify the directory structure and check version consistency across package.json files to ensure a proper local development setup.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if dist/ucan exists and contains necessary files
fd -t d "dist/ucan$" -x ls -la {} \;
Length of output: 41
Script:
#!/bin/bash
# Let's check if the dist directory exists and its structure
fd -t d "^dist$" -x ls -la {} \;
# Let's also check for any package.json files to verify version consistency
fd "package.json$" -x cat {} \; | grep -A 1 "@fireproof/core\|@fireproof/ucan"
Length of output: 599
async ensure(t: TenantLedgerRow) {
  await this.tenantSql.ensure(t);
  const stmt = this.sqlEnsureTenantLedger();
  return stmt.run(t.tenant, t.ledger, t.createdAt.toISOString(), t.tenant, t.ledger);
}
🛠️ Refactor suggestion
Error handling consideration
In the `ensure` method, if `tenantSql.ensure(t)` fails or returns an unexpected result, we might silently pass the error along. Consider capturing or logging exceptions from both ensures to help with debugging if the DB insertion fails.
src/cloud/meta-merger/meta-send.ts
Outdated
readonly #sqlDeleteByConnection = new ResolveOnce<Statement>();
sqlDeleteByMetaCID(): Statement<[string, string, string, string, string], void> {
  return this.#sqlDeleteByConnection.once(() => {
    return this.db.prepare(`
      DELETE FROM MetaSend
      WHERE metaCID in (SELECT metaCID FROM MetaByTenantLedger
        WHERE
          tenant = ?
        AND
          ledger = ?
        AND
          reqId = ?
        AND
          resId = ?
        AND
          metaCID NOT IN (SELECT value FROM json_each(?)))
    `);
  });
}

async deleteByConnection(dmi: ByConnection & { metaCIDs: string[] }) {
  const stmt = this.sqlDeleteByMetaCID();
  return stmt.run(dmi.tenant, dmi.ledger, dmi.reqId, dmi.resId, JSON.stringify(dmi.metaCIDs));
}
🛠️ Refactor suggestion
Deletion filtering
`deleteByConnection` uses JSON-based filtering in the subselect. While flexible, ensure this does not become a performance bottleneck for large sets of metaCIDs.
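The semantics of that `json_each` subselect (delete every stored metaCID for the connection that is not in the JSON keep-list) can be mirrored in plain code, which also makes the empty-keep-list case explicit. Names here are illustrative:

```javascript
// In-memory mirror of the DELETE's filtering semantics: rows whose
// metaCID is NOT in the keep-list are removed. Illustrative only.
function cidsToDelete(storedCIDs, keepCIDs) {
  const keep = new Set(keepCIDs);
  return storedCIDs.filter((cid) => !keep.has(cid));
}

console.log(cidsToDelete(["a", "b", "c"], ["b"])); // [ 'a', 'c' ]
console.log(cidsToDelete(["a", "b"], [])); // [ 'a', 'b' ]  (empty keep-list deletes all)
```

If the metaCIDs array grows very large, chunking the JSON parameter or staging the keep-list in a temporary table are the usual escape hatches.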
src/cloud/meta-merger/tenant.ts
Outdated
async ensure(t: TenantRow) {
  const stmt = this.sqlEnsureTenant();
  return stmt.run(t.tenant, t.createdAt.toISOString(), t.tenant);
}
🛠️ Refactor suggestion
Ensure robust error handling.
If the `.run()` call fails—for instance, due to database locks or invalid data inputs—no error handling or retry logic is in place. Consider adding error handling or retries here if reliability is a top priority.
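A small retry wrapper is one hedged way to do this. The helper below is a sketch (the names, attempt count, and backoff are assumptions; a production version would retry only on transient errors such as a busy database):

```javascript
// Generic retry sketch: re-run an async operation a few times with a
// growing delay before giving up. Illustrative, not the reviewed code.
async function withRetries(fn, { attempts = 3, delayMs = 5 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
  throw lastErr;
}

// Usage sketch: an operation that fails twice, then succeeds.
let calls = 0;
withRetries(async () => {
  calls += 1;
  if (calls < 3) throw new Error("simulated busy database");
  return "ok";
}).then((result) => console.log(result, calls)); // ok 3
```

Wrapping `stmt.run(...)` in such a helper keeps the call site readable while making the reliability policy explicit.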
// SHARING
// -------

/**
 * Claim a specific share.
 *
 * ⚠️ Polls until the share is confirmed and the delegation is available on the server.
 *
 * @param context
 * @param context.email The (principal) email of the receiver of the share.
 * @param context.cid The CID communicated by the sharer.
 */
export async function claimShare(
  args: { email: Principal<DidMailto>; cid: string },
  context?: {
    agent?: AgentWithStoreName;
    server?: Server;
  }
) {
  const ctx = {
    agent: context?.agent || (await agent({ server: context?.server })),
    server: context?.server || (await server()),
  };

  const proofs = agentProofs(ctx.agent.agent);
  const attestation = proofs.attestations[0];
  const delegation = proofs.delegations[0];

  const claim = async () => {
    const resp = await ClockCaps.claimShare
      .invoke({
        issuer: ctx.agent.agent.issuer,
        audience: ctx.server.id,
        with: ctx.agent.id.did(),
        proofs: [attestation, delegation],
        nb: { recipient: args.email.did(), proof: parseLink(args.cid) },
      })
      .execute(ctx.agent.agent.connection);

    if (resp.out.error) throw resp.out.error;
    return Object.values(resp.out.ok.delegations).flatMap((proof) => bytesToDelegations(proof));
  };

  const poll = async () => {
    const proofs = await claim();
    const attestation = proofs.find((p) => p.capabilities[0].can === "ucan/attest");

    if (!attestation) {
      await new Promise((resolve) => {
        setTimeout(resolve, 2500);
      });

      return await poll();
    }

    return proofs;
  };

  return await poll();
}
🛠️ Refactor suggestion
Recursive polling logic may need a timeout or iteration limit.
`claimShare` uses a recursive `poll()` function that retries indefinitely until a share is found. Consider providing a maximum retry count or a timeout to avoid potential infinite loops.
+ let attempts = 0;
const poll = async () => {
+ attempts++;
+ if (attempts > 20) {
+ throw new Error("Max polling attempts reached.");
+ }
const proofs = await claim();
...
};
Committable suggestion skipped: line range outside the PR's diff.
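A standalone version of the bounded approach that suggestion sketches, generalized into a helper (all names and defaults are illustrative, not part of the PR):

```javascript
// Bounded polling sketch: retry `check` until it yields a value, but
// fail after maxAttempts instead of looping forever.
async function pollUntil(check, { maxAttempts = 20, intervalMs = 50 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await check();
    if (result !== undefined) return result;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Max polling attempts (${maxAttempts}) reached.`);
}

// Usage sketch: the "share" shows up on the third poll.
let polls = 0;
pollUntil(async () => (++polls >= 3 ? "attestation" : undefined), { intervalMs: 1 })
  .then((found) => console.log(found, polls)); // attestation 3
```

An iterative loop like this also avoids growing a promise chain on every retry, which the recursive form in `claimShare` would do under long waits.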
Actionable comments posted: 5
🧹 Nitpick comments (11)
src/sql/v0.19/sqlite_factory.ts (1)
59-59: Inject filesystem references for better testability. While calling `getFileSystem(URI.from("file:///"))` directly here is functional, consider injecting the filesystem reference via constructor parameters or a higher-level factory. This approach can simplify mocking and error handling in tests, especially if you want to simulate different filesystems (e.g., Docker-based or remote).
package.json (1)
31-31: Consider enhancing Docker test scripts with health checks and error handling. The docker-compose scripts look good for local testing, but consider these improvements for robustness:
- Add health checks to ensure services are truly ready
- Add timeout configurations
- Include error handling for container startup failures
Example enhancement for `pretest-docker`:
-"pretest-docker": "tsx tests/docker-compose.ts -f tests/docker-compose.yaml up -d --wait --build",
+"pretest-docker": "tsx tests/docker-compose.ts -f tests/docker-compose.yaml up -d --wait --build --timeout 60 && tsx tests/health-check.ts",
Also applies to: 33-33
src/cloud/meta-merger/meta-by-tenant-ledger.ts (2)
7-15: Clarify naming of the `updateAt` field. The interface uses `updateAt`, but your table schema references `updatedAt`. This naming inconsistency can lead to confusion or errors when referencing the property.
133-136: Handle concurrency or conflicts on insert. If an entry already exists in the table, calling `ensure()` may cause an error due to primary key or unique constraints. To improve reliability, consider wrapping insert operations in transactions or implementing an "upsert" approach.
src/cloud/meta-merger/cf-worker-abstract-sql.ts (2)
5-18: Ensure error handling for run(). `run()` can fail if the query or binding parameters are invalid. Currently, no catch or fallback is performed. Consider wrapping the call in a try/catch block or rejecting with clearer context when an error occurs.
20-29: Check for concurrency. `CFWorkerSQLDatabase` opens and prepares statements without explicit concurrency control. In multi-tenant or high-traffic scenarios, note that D1 might handle concurrency behind the scenes, but consider clarifying usage with transactions if needed.
src/cloud/meta-merger/bettersql-abstract-sql.ts (1)
19-32: Accept or override default SQLite settings. `better-sqlite3` may benefit from being configured with pragmas for journaling or synchronization modes. Evaluate whether the defaults (e.g., WAL) suit your environment's performance and durability needs.
src/cloud/meta-merger/abstract-sql.ts (1)
26-28: Encapsulate database connection details within a factory or higher-level module. Currently, the `SQLDatabase` interface only exposes a `prepare` method. Consider adding methods or a helper factory to manage pooled connections or handle reconnection logic, especially for production workloads.
src/cloud/meta-merger/tenant-ledger.ts (1)
19-26: Consider adding indexes for performance optimization. Since this table will be frequently queried in the local Docker environment, consider adding indexes on commonly queried columns to improve performance. Add the following indexes after the table creation:
CREATE INDEX IF NOT EXISTS idx_tenant_ledger_tenant ON TenantLedger(tenant);
CREATE INDEX IF NOT EXISTS idx_tenant_ledger_ledger ON TenantLedger(ledger);
src/cloud/meta-merger/meta-merger.ts (1)
70-99: Batch insert optimization needed. The current implementation performs individual inserts in a loop, which might be inefficient in a local Docker environment. Consider using batch inserts to improve performance:
async addMeta(mm: MetaMerge) {
  if (!mm.metas.length) {
    return;
  }
  const now = mm.now || new Date();
  const byConnection = toByConnection(mm.connection);
  const connCIDs = {
    ...byConnection,
    metaCIDs: mm.metas.map((meta) => meta.cid),
  };
  await this.sql.metaSend.deleteByConnection(connCIDs);
  await this.sql.metaByTenantLedger.deleteByConnection(connCIDs);
  await this.sql.tenantLedger.ensure({
    tenant: mm.connection.key.tenant,
    ledger: mm.connection.key.ledger,
    createdAt: now,
  });
+ const batchSize = 100;
+ for (let i = 0; i < mm.metas.length; i += batchSize) {
+   const batch = mm.metas.slice(i, i + batchSize);
    try {
-     await this.sql.metaByTenantLedger.ensure({
-       ...byConnection,
-       metaCID: meta.cid,
-       meta: meta,
-       updateAt: now,
-     });
+     await Promise.all(
+       batch.map((meta) =>
+         this.sql.metaByTenantLedger.ensure({
+           ...byConnection,
+           metaCID: meta.cid,
+           meta: meta,
+           updateAt: now,
+         })
+       )
+     );
    } catch (e) {
-     this.sthis.logger.Warn().Err(e).Str("metaCID", meta.cid).Msg("addMeta");
+     this.sthis.logger.Warn().Err(e).Msg(`Failed to add batch starting at index ${i}`);
    }
  }
}
src/cloud/meta-merger/meta-send.ts (1)
63-70: Optimize query performance with proper indexing. The `sqlSelectToAddSend` query might be slow in a local Docker environment without proper indexing. Add the following index to improve query performance:
CREATE INDEX IF NOT EXISTS idx_meta_send_composite ON MetaSend(metaCID, reqId, resId);
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (18)
- package.json (5 hunks)
- src/cloud/backend/env.d.ts (1 hunks)
- src/cloud/backend/wrangler.toml (1 hunks)
- src/cloud/meta-merger/abstract-sql.ts (1 hunks)
- src/cloud/meta-merger/bettersql-abstract-sql.ts (1 hunks)
- src/cloud/meta-merger/cf-worker-abstract-sql.ts (1 hunks)
- src/cloud/meta-merger/create-schema-cli.ts (1 hunks)
- src/cloud/meta-merger/meta-by-tenant-ledger.ts (1 hunks)
- src/cloud/meta-merger/meta-merger.test.ts (1 hunks)
- src/cloud/meta-merger/meta-merger.ts (1 hunks)
- src/cloud/meta-merger/meta-send.ts (1 hunks)
- src/cloud/meta-merger/tenant-ledger.ts (1 hunks)
- src/cloud/meta-merger/tenant.ts (1 hunks)
- src/sql/sqlite.test.ts (2 hunks)
- src/sql/v0.19/sqlite_factory.ts (2 hunks)
- tsconfig.json (3 hunks)
- vitest.cf-worker.config.ts (1 hunks)
- vitest.workspace.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- src/cloud/meta-merger/create-schema-cli.ts
- src/cloud/meta-merger/meta-merger.test.ts
- tsconfig.json
🔇 Additional comments (17)
src/sql/sqlite.test.ts (1)
1-2: Looks consistent with the updated file system access approach. Translating the reference from `rt` to `getFileSystem` aligns with the new pattern across the codebase, ensuring consistency.
package.json (3)
54-79: Verify compatibility with updated dev dependencies. The updates look good, but since multiple testing-related packages have been updated (`vitest`, `@typescript-eslint/*`, etc.), ensure that:
- All test suites pass with the new versions
- No deprecation warnings are present
- The new WebSocket dependency (`[email protected]`) works correctly with the local Docker setup
Run this script to check for any deprecation warnings or compatibility issues:
✅ Verification successful
Dependencies are correctly integrated and no compatibility issues found
The verification results show:
- No warnings or errors from the package manager (`pnpm ls`)
- The `ws` package (v8.18.0) is correctly integrated as a dependency of `miniflare`, which is used by both `wrangler` and `@cloudflare/vitest-pool-workers`
- All TypeScript-related packages (`@typescript-eslint/*`) are properly linked and have consistent versions (8.18.1)
- `vitest` is correctly integrated with its peer dependencies
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check package compatibility and deprecation warnings
pnpm ls --depth=0 2>&1 | grep -i "WARN\|ERR"
# Verify test-related packages
pnpm why vitest @typescript-eslint/typescript-estree ws
Length of output: 1141
90-92: Verify the necessity of @adviser/cement as both peer and regular dependency. The package appears as both a peer dependency and a regular dependency with the same version (^0.3.2). This might lead to:
- Potential circular dependencies
- Version conflicts with other packages
Run this script to check for potential issues:
Also applies to: 94-94
✅ Verification successful
@adviser/cement should be kept as both regular and peer dependency
The current setup is correct and necessary because:
- The package is heavily used throughout the codebase (60+ files import it)
- It's a direct dependency needed for the package to work (as shown by `pnpm why`)
- It's also correctly declared as a peer dependency to allow consumers to provide their own compatible version
- No other packages in the workspace declare it as a peer dependency, reducing version-conflict risks
This is a common pattern for packages that are both used internally and expected to be provided by the consuming application.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for duplicate installations and version conflicts
pnpm why @adviser/cement
# Check if any other packages depend on @adviser/cement
pnpm ls --depth=Infinity | grep -B2 -A2 "@adviser/cement"
Length of output: 389
Script:
#!/bin/bash
# Check package.json for dependencies and peer dependencies
cat package.json | jq '.dependencies["@adviser/cement"], .peerDependencies["@adviser/cement"]'
# Search for usage of @adviser/cement in the codebase
rg -l "@adviser/cement" --type ts --type js
# Check if there are any other packages that list @adviser/cement as peer dependency
fd package.json --type f --exec sh -c 'echo "=== {} ===" && cat {} | jq -r ".peerDependencies | select(.\"@adviser/cement\" != null)"'
Length of output: 2517
96-101: Review security implications and necessity of new dependencies. Several new packages have been added to support the transition to local Docker testing. Please verify:
- Security implications of new packages, especially those handling server functionality (@hono/*, partyserver)
- Necessity of all new packages to avoid bloat
- Potential overlap in functionality between packages
Run these checks:
Also applies to: 118-121, 123-123, 129-130, 132-135
✅ Verification successful
Let me gather more information about the usage of these packages in the codebase.
Let me try one more search with a corrected regex to find the usage of these packages.
Based on the analysis of the codebase, I can now provide a final response regarding the security and necessity of the new dependencies.
Dependencies are necessary and security concerns are minimal
The codebase shows that:
- `@hono/node-server` and `@hono/node-ws` are actively used for WebSocket server implementations in `src/cloud/msger.ts` and `src/cloud/hono-server.js`
- The Cloudflare-related packages are used in tests and core functionality
- There's only one moderate security vulnerability in a transitive dependency (nanoid < 3.3.8), which is not directly related to the new packages
- `@hono/*` packages: Used for WebSocket and server implementations, essential for local testing
- `@cloudflare/*` packages: Required for worker pool and types, used in core functionality
- `@fireproof/*` packages: Core dependencies with active usage throughout the codebase
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for known vulnerabilities
pnpm audit
# Check package sizes and their dependencies
pnpm why @hono/node-server @hono/node-ws partyserver
# Look for packages with similar functionality
pnpm ls | grep -E 'server|ws|socket'
Length of output: 2348
Script:
#!/bin/bash
# Check for usage of Hono and WebSocket related code
rg -l "from ['\"](\.\.\/)*@hono" --type ts
rg -l "from ['\"](\.\.\/)*partyserver" --type ts
# Look for WebSocket server implementations
ast-grep --pattern 'new WebSocket($$$)'
# Check package.json for related dev dependencies
rg '"devDependencies"|"dependencies"' package.json -A 5 -B 5
Length of output: 1287
Script:
#!/bin/bash
# Search for Hono and WebSocket imports with fixed regex
rg "from ['\"]@hono" --type ts
rg "import.*from ['\"]@hono" --type ts
# Look for server implementations
ast-grep --pattern 'serve($$$)'
# Check test files for usage
fd -e test.ts -x cat {}
Length of output: 44289
src/cloud/meta-merger/meta-by-tenant-ledger.ts (3)

44-55: Assess the uniqueness constraints.

Currently, `UNIQUE(metaCID)` can conflict with the primary key constraint `(tenant, ledger, reqId, resId, metaCID)`. If entries share the same `metaCID` but differ in the other fields, insertion will fail. Verify that this strict uniqueness is intended.

128-131: Validate empty array behavior for `metaCIDs`.

When `t.metaCIDs` is empty, the `NOT IN (SELECT value FROM json_each(?))` clause matches every row, since `NOT IN` over an empty set is always true, so the delete would remove all rows for the connection. Consider handling this case separately to ensure correct deletions.
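A minimal sketch of such a guard (the parameter shape mirrors the `ByConnection & { metaCIDs: string[] }` argument discussed in this review; the helper name is hypothetical):

```typescript
// Hypothetical guard: decide whether the keep-list DELETE should run at all.
// In SQL, `metaCID NOT IN (<empty set>)` is true for every row, so executing
// the statement with an empty metaCIDs array would delete every row for the
// connection rather than deleting nothing.
interface DeleteByConnectionParams {
  tenant: string;
  ledger: string;
  reqId: string;
  resId: string;
  metaCIDs: string[]; // metaCIDs to KEEP; all others are deleted
}

function shouldRunKeepListDelete(p: DeleteByConnectionParams): boolean {
  // With an empty keep-list, "delete everything" is rarely the intent,
  // so treat it as a no-op and force callers to delete-all explicitly.
  return p.metaCIDs.length > 0;
}
```

Whether an empty list should mean "delete nothing" or "delete everything" is a product decision; the sketch only makes that choice explicit instead of leaving it to SQL set semantics.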
150-161: Safeguard against malformed JSON.

When parsing `row.meta`, malformed JSON in the database could cause runtime exceptions. Consider adding try/catch logic or validating the stored JSON.

vitest.cf-worker.config.ts (2)
9-9: Confirm the new wrangler config path.

Ensure that `"./src/cloud/backend/wrangler.toml"` is valid and accessible across environments, particularly CI/CD, in order to avoid missing configuration errors.

12-14: Check modified test inclusion patterns.

You replaced the "cf-kv" name with "cf-worker" and narrowed the test files to `["src/cloud/meta-merger/*.test.ts"]`. This may skip existing tests outside that directory. Verify that you still intend to exclude the prior test patterns.

vitest.workspace.ts (2)
12-12: Confirm cfWorker import.

You replaced the old cf-kv config with `cfWorker`. Make sure that `cfWorker` references the correct test configuration for this environment.

23-23: Maintain consistent test coverage.

Adding the `cfWorker` config is helpful, but confirm whether any other test suites (e.g., the commented-out connector) should also be reinstated or removed to keep coverage consistent.

src/cloud/meta-merger/bettersql-abstract-sql.ts (1)
5-17: Verify async usage with a synchronous library.

`better-sqlite3` is generally synchronous, while your methods (`run`, `all`) are `async`. Ensure you do not expect concurrency benefits from awaiting calls to a synchronous library, as the calls still block the event loop.

src/cloud/meta-merger/abstract-sql.ts (2)
39-44: Validate table name if externally provided.

When passing `tabName` to `conditionalDrop`, ensure it cannot be manipulated to inject malicious SQL. If `tabName` is user input, sanitize or validate it.
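One way to enforce this, sketched with a hypothetical helper (identifiers cannot be bound as SQL parameters, so an allow-list check is the usual fallback):

```typescript
// Hypothetical validator for identifiers interpolated into DDL such as
// `DROP TABLE ${tabName}`. Placeholders (?) only work for values, not
// identifiers, so restrict the name to a conservative character set.
const IDENTIFIER_RE = /^[A-Za-z_][A-Za-z0-9_]*$/;

function assertSafeTableName(tabName: string): string {
  if (!IDENTIFIER_RE.test(tabName)) {
    throw new Error(`Refusing to use unsafe table name: ${JSON.stringify(tabName)}`);
  }
  return tabName;
}
```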
46-53: Consider storing more robust date representations.

`sqliteCoerceParams` converts `Date` objects to ISO strings. This is usually fine, but if time zones or high-precision timestamps matter, confirm that ISO date strings meet business requirements.

src/cloud/backend/env.d.ts (1)
8-39: Double-check optional environment variables.

Several environment variables (e.g., `TEST_DATE`, `MAX_IDLE_TIME`) are optional. Confirm that all downstream code handles `undefined` values gracefully.

src/cloud/backend/wrangler.toml (1)
29-30: Avoid hardcoded credentials in configuration.

This practice has already been flagged in a previous review. Please see the existing suggestions about using environment variables or secure vaults for credential management.

src/cloud/meta-merger/tenant-ledger.ts (1)
56-60: ⚠️ Potential issue

Enhance error handling for database operations.

The `ensure` method should handle potential database errors, which is especially important for a local Docker environment where network stability might differ from Cloudflare.

Apply this diff to add proper error handling:

```diff
 async ensure(t: TenantLedgerRow) {
-  this.tenantSql.ensure({ tenant: t.tenant, createdAt: t.createdAt });
-  const stmt = this.sqlEnsureTenantLedger();
-  return stmt.run(t.tenant, t.ledger, t.createdAt, t.tenant, t.ledger);
+  try {
+    await this.tenantSql.ensure({ tenant: t.tenant, createdAt: t.createdAt });
+    const stmt = this.sqlEnsureTenantLedger();
+    return await stmt.run(t.tenant, t.ledger, t.createdAt.toISOString(), t.tenant, t.ledger);
+  } catch (error) {
+    throw new Error(`Failed to ensure tenant ledger: ${error.message}`);
+  }
 }
```

Likely invalid or redundant comment.
```ts
sqlEnsureMetaByTenantLedger(): SQLStatement {
  return this.#sqlInsertMetaByTenantLedger.once(() => {
    return this.db.prepare(`
      INSERT INTO MetaByTenantLedger(tenant, ledger, reqId, resId, metaCID, meta, updatedAt)
      VALUES(?, ?, ?, ?, ?, ?, ?)
    `);
  });
}
```
🛠️ Refactor suggestion
Consider handling potential insert conflicts.

The `INSERT INTO ...` statement may fail if entries with the same primary key or `metaCID` already exist. Consider using `ON CONFLICT` clauses or a check to skip duplicates before insertion.
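A sketch of what that could look like for the statement above (table and column names taken from the snippet; whether to `DO NOTHING` or `DO UPDATE` depends on the intended semantics, and the separate `UNIQUE(metaCID)` constraint would still need its own handling if it is kept):

```typescript
// Hypothetical upsert variant of the insert shown above: on a primary-key
// collision it refreshes meta/updatedAt instead of failing the statement.
const upsertMetaByTenantLedger = `
  INSERT INTO MetaByTenantLedger(tenant, ledger, reqId, resId, metaCID, meta, updatedAt)
  VALUES(?, ?, ?, ?, ?, ?, ?)
  ON CONFLICT(tenant, ledger, reqId, resId, metaCID)
  DO UPDATE SET meta = excluded.meta, updatedAt = excluded.updatedAt
`;
```

In SQLite, the `ON CONFLICT(...)` target must match a uniqueness constraint, so this clause keys off the composite primary key.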
```ts
async ensure(t: TenantRow) {
  const stmt = this.sqlEnsureTenant();
  return stmt.run(t.tenant, t.createdAt, t.tenant);
}
```
🛠️ Refactor suggestion
Add error handling and retry logic for database writes.

The `.run()` call for inserting a tenant does not have a failure-handling strategy. For mission-critical data, consider wrapping these inserts in try/catch blocks or adding retries to manage potential locks and concurrency errors.
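A sketch of a narrowly scoped retry, assuming the driver surfaces SQLite error codes on a `code` property as better-sqlite3 does; only lock contention is retried, while other failures propagate immediately:

```typescript
// Hypothetical helper: retry only transient lock errors, with a capped
// exponential backoff, and rethrow everything else unchanged.
const RETRYABLE = new Set(["SQLITE_BUSY", "SQLITE_LOCKED"]);

async function runWithBusyRetry<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 50
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      const code = (err as { code?: string }).code;
      if (attempt >= maxAttempts || !code || !RETRYABLE.has(code)) {
        throw err;
      }
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Keeping the retry set narrow avoids masking genuine constraint violations (which would never succeed on retry) behind repeated attempts.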
```ts
async createSchema(drop = false) {
  for (const i of this.sql.metaSend.sqlCreateMetaSend(drop)) {
    await i.run();
  }
}
```
🛠️ Refactor suggestion
Add transaction support for schema creation
When creating schema in local Docker environment, ensure atomic operations using transactions.
Apply this diff to wrap schema creation in a transaction:
```diff
 async createSchema(drop = false) {
+  await this.db.exec('BEGIN TRANSACTION');
+  try {
     for (const i of this.sql.metaSend.sqlCreateMetaSend(drop)) {
       await i.run();
     }
+    await this.db.exec('COMMIT');
+  } catch (error) {
+    await this.db.exec('ROLLBACK');
+    throw new Error(`Schema creation failed: ${error.message}`);
+  }
 }
```
Committable suggestion skipped: line range outside the PR's diff.
```ts
async selectToAddSend(conn: ByConnection & { now: Date }): Promise<MetaSendRowWithMeta[]> {
  const stmt = this.sqlSelectToAddSend();
  const rows = await stmt.all<SQLMetaSendRowWithMeta>(
    conn.reqId,
    conn.resId,
    conn.now,
    conn.tenant,
    conn.ledger,
    conn.reqId,
    conn.resId
  );
  return rows.map(
    (i) =>
      ({
        metaCID: i.metaCID,
        reqId: i.reqId,
        resId: i.resId,
        sendAt: new Date(i.sendAt),
        meta: JSON.parse(i.meta) as CRDTEntry,
      }) satisfies MetaSendRowWithMeta
  );
}
```
Add validation for date parsing.

The date parsing in `selectToAddSend` might fail silently in different timezone configurations.
Apply this diff to add proper date validation:
```diff
 async selectToAddSend(conn: ByConnection & { now: Date }): Promise<MetaSendRowWithMeta[]> {
   const stmt = this.sqlSelectToAddSend();
   const rows = await stmt.all<SQLMetaSendRowWithMeta>(
     conn.reqId,
     conn.resId,
     conn.now,
     conn.tenant,
     conn.ledger,
     conn.reqId,
     conn.resId
   );
   return rows.map(
     (i) => {
+      let sendAt: Date;
+      try {
+        sendAt = new Date(i.sendAt);
+        if (isNaN(sendAt.getTime())) {
+          throw new Error(`Invalid date: ${i.sendAt}`);
+        }
+      } catch (error) {
+        throw new Error(`Failed to parse sendAt date: ${error.message}`);
+      }
       return ({
         metaCID: i.metaCID,
         reqId: i.reqId,
         resId: i.resId,
-        sendAt: new Date(i.sendAt),
+        sendAt,
         meta: JSON.parse(i.meta) as CRDTEntry,
       }) satisfies MetaSendRowWithMeta
     }
   );
 }
```
```ts
async deleteByConnection(dmi: ByConnection & { metaCIDs: string[] }) {
  const stmt = this.sqlDeleteByMetaCID();
  return stmt.run(dmi.tenant, dmi.ledger, dmi.reqId, dmi.resId, JSON.stringify(dmi.metaCIDs));
}
```
🛠️ Refactor suggestion
Add retry mechanism for connection issues
In a local Docker environment, network connectivity might be less stable than in Cloudflare.
Add a retry mechanism for the delete operation:
```diff
+  private async retry<T>(
+    operation: () => Promise<T>,
+    retries = 3,
+    delay = 1000
+  ): Promise<T> {
+    try {
+      return await operation();
+    } catch (error) {
+      if (retries === 0) throw error;
+      await new Promise(resolve => setTimeout(resolve, delay));
+      return this.retry(operation, retries - 1, delay * 2);
+    }
+  }
+
   async deleteByConnection(dmi: ByConnection & { metaCIDs: string[] }) {
     const stmt = this.sqlDeleteByMetaCID();
-    return stmt.run(dmi.tenant, dmi.ledger, dmi.reqId, dmi.resId, JSON.stringify(dmi.metaCIDs));
+    return this.retry(() =>
+      stmt.run(dmi.tenant, dmi.ledger, dmi.reqId, dmi.resId, JSON.stringify(dmi.metaCIDs))
+    );
   }
```
Actionable comments posted: 4
🧹 Nitpick comments (8)
src/cloud/hono-server.ts (1)

11-11: Replace void with undefined in union type.

Using `void` in a union type can be confusing, as it represents "no return value" rather than a value. Consider using `undefined` instead.

```diff
-export type ConnMiddleware = (conn: WSConnection, c: Context, next: Next) => Promise<Response | void>;
+export type ConnMiddleware = (conn: WSConnection, c: Context, next: Next) => Promise<Response | undefined>;
```

🧰 Tools
🪛 Biome (1.9.4)

[error] 11-11: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/msg-dispatch.ts (2)

41-54: Enhance WebSocket connection management.

The `WSConnection` class could benefit from connection state tracking and cleanup mechanisms.

```diff
 export class WSConnection {
   readonly conn: Connection;
   wspair?: WSPair;
+  private _state: 'connecting' | 'connected' | 'closed' = 'connecting';
+  private _closeHandlers: Set<() => void> = new Set();
+
   constructor(conn: Connection) {
     this.conn = conn;
   }
+
   attachWSPair(wsp: WSPair) {
     if (!this.wspair) {
       this.wspair = wsp;
+      this._state = 'connected';
+      wsp.client.addEventListener('close', () => this._handleClose());
+      wsp.server.addEventListener('close', () => this._handleClose());
     } else {
       throw new Error("wspair already set");
     }
   }
+
+  private _handleClose() {
+    this._state = 'closed';
+    this._closeHandlers.forEach(handler => handler());
+  }
+
+  onClose(handler: () => void): void {
+    this._closeHandlers.add(handler);
+  }
+
+  get state(): string {
+    return this._state;
+  }
```

81-119: Improve request validation and logging.

The dispatch method could benefit from:

- Request payload validation
- Detailed logging for debugging
- Request timing metrics

```diff
 async dispatch(ctx: HonoServerImpl, msg: MsgBase, send: (msg: MsgBase) => void) {
+  const startTime = Date.now();
+  const logContext = {
+    msgType: Object.keys(msg).find(k => k.startsWith('req')),
+    reqId: msg.reqId
+  };
+
+  try {
+    // Validate common message properties
+    if (!msg.reqId) {
+      throw new Error('Missing reqId in message');
+    }
+
   switch (true) {
     case MsgIsReqGestalt(msg):
+      this.logger.Debug().With(logContext).Msg('Processing gestalt request');
       return send(buildResGestalt(msg, this.gestalt));
     case MsgIsReqOpen(msg): {
+      this.logger.Debug().With(logContext).Msg('Processing open request');
       if (!msg.conn) {
         return send(buildErrorMsg(this.sthis, this.logger, msg, new Error("missing connection")));
       }
```

src/cloud/connection.test.ts (3)
34-51: Consider using a more secure method for AWS credentials.

The function retrieves AWS credentials directly from environment variables. While this is common, consider:

- Using AWS IAM roles for enhanced security in production
- Supporting AWS credential providers for local development

Would you like me to provide examples of implementing AWS credential providers for different environments?

76-86: Consider adding more error scenarios.

While the basic error cases (connection refused and timeout) are covered, consider adding tests for:

- Network disconnection during an active connection
- Invalid protocol version
- Authentication failures

Would you like me to provide example test cases for these scenarios?

88-170: Enhance connection lifecycle test coverage.

Consider adding tests for:

- Connection cleanup after test completion
- Resource leakage scenarios
- Reconnection attempts

```diff
 describe(`connection`, () => {
   let c: MsgConnection;
+  let resourceCount: number;
+
+  beforeAll(() => {
+    resourceCount = process._getActiveHandles().length;
+  });
+
   beforeEach(async () => {
     const rC = await style.ok.open();
     expect(rC.isOk()).toBeTruthy();
     c = rC.Ok();
   });
+
   afterEach(async () => {
     await c.close();
+    // Ensure no resource leaks
+    expect(process._getActiveHandles().length).toBe(resourceCount);
   });
```

src/cloud/backend/server.ts (2)
431-439: Clean up commented code.

There are large sections of commented code throughout the file. Consider:

- Removing commented code if it is no longer needed
- Moving work-in-progress code to feature branches
- Using version control to track code history instead of comments

Would you like me to help identify all sections of commented code that can be safely removed?
24-35: Remove redundant case clause in switch statement.

The `case "json"` clause is redundant, as it performs the same action as the `default` case.

```diff
 switch (env.FP_FORMAT) {
   case "jsonice":
     logger.SetFormatter(new JSONFormatter(logger.TxtEnDe(), 2));
     break;
   case "yaml":
     logger.SetFormatter(new YAMLFormatter(logger.TxtEnDe(), 2));
     break;
-  case "json":
   default:
     logger.SetFormatter(new JSONFormatter(logger.TxtEnDe()));
     break;
 }
```
🧰 Tools
🪛 Biome (1.9.4)
[error] 31-31: Useless case clause.
because the default clause is present:
Unsafe fix: Remove the useless case.
(lint/complexity/noUselessSwitchCase)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)

- src/cloud/backend/cf-hono-server.ts (1 hunks)
- src/cloud/backend/server.ts (1 hunks)
- src/cloud/backend/wrangler.toml (1 hunks)
- src/cloud/connection.test.ts (1 hunks)
- src/cloud/hono-server.ts (1 hunks)
- src/cloud/meta-merger/meta-merger.test.ts (1 hunks)
- src/cloud/msg-dispatch.ts (1 hunks)
- src/cloud/node-hono-server.ts (1 hunks)
- src/cloud/test-helper.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- src/cloud/meta-merger/meta-merger.test.ts
- src/cloud/backend/cf-hono-server.ts
- src/cloud/backend/wrangler.toml
🧰 Additional context used
🪛 Biome (1.9.4)
src/cloud/backend/server.ts
[error] 31-31: Useless case clause.
because the default clause is present:
Unsafe fix: Remove the useless case.
(lint/complexity/noUselessSwitchCase)
src/cloud/hono-server.ts
[error] 11-11: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
🔇 Additional comments (15)
src/cloud/hono-server.ts (5)

21-27: CORS configuration needs environment-specific settings.

The current CORS configuration uses permissive settings that are not environment-aware. Consider:

- Moving CORS configuration to environment-specific settings
- Using environment variables to specify allowed domains for production
- Adding `Access-Control-Allow-Headers` to specify allowed headers
76-76: Remove non-null assertion operator.

The non-null assertion operator (`!`) on `reqOpen.conn` could lead to runtime errors. Consider using proper null checking.
99-102: Implement proper cleanup in WebSocket onClose handler.

The onClose handler needs proper cleanup:

- Remove the connection from the dispatcher
- Clean up any resources
- Log the connection closure with relevant details

101-101: ⚠️ Potential issue

Replace unsafe type casting with proper cleanup.

Using `undefined as unknown as MsgDispatcher` is an unsafe type cast that could lead to runtime errors.

```diff
-dp = undefined as unknown as MsgDispatcher;
+dp.close(); // Add a close method to MsgDispatcher
+dp = null;
```

Likely invalid or redundant comment.
115-122: Verify implementation status of honoServer function.

The commented-out `honoServer` function appears to be important for runtime detection and server implementation selection. Given the PR's objective to transition from Cloudflare to a local Docker setup, this implementation might be necessary.

✅ Verification successful

Let me check the actual implementations to better understand the status.

Let me check one more thing to understand if there's an active implementation of the honoServer function.

Commented honoServer function can be safely removed

The commented-out `honoServer` function in `src/cloud/hono-server.ts` is no longer needed, as the implementation has been replaced with a factory pattern. The functionality is now handled by:

- `NodeHonoServerFactory` in `test-helper.ts`, which creates instances of `HonoServer` with the `NodeHonoServer` implementation
- A fully implemented `NodeHonoServer` class in `node-hono-server.ts`

The code can be safely removed, as it represents an older implementation pattern that has been superseded by the factory-based approach, which is actively used in the tests.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Check if there are any other implementations of honoServer
rg -l "honoServer" --type ts
# Check for NodeHonoServer implementation
rg -l "NodeHonoServer" --type ts
```

Length of output: 330

Script:

```shell
#!/bin/bash
# Check the implementation in node-hono-server.ts
rg "class NodeHonoServer" -A 5 src/cloud/node-hono-server.ts
# Check if honoServer is implemented elsewhere
rg "function honoServer" -A 5 src/cloud/backend/server.ts
# Check the test usage
rg "honoServer" -A 5 src/cloud/connection.test.ts
```

Length of output: 1237

Script:

```shell
#!/bin/bash
# Check for any active (uncommented) honoServer implementation
rg "export (async )?function honoServer" --type ts -A 5
# Check the test helper to understand the factory pattern
rg "NodeHonoServerFactory" -A 5 src/cloud/test-helper.ts
```

Length of output: 867
src/cloud/node-hono-server.ts (4)

27-36: 🛠️ Refactor suggestion

Add validation for required AWS credentials.

The method assumes AWS credentials are present in environment variables without validation. Missing or invalid credentials could cause runtime errors.

```diff
 calculatePreSignedUrl(p: PreSignedConnMsg): Promise<Result<URI>> {
+  const requiredEnvVars = ['STORAGE_URL', 'ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', 'REGION'];
+  const missing = requiredEnvVars.filter(v => !this.env[v]);
+  if (missing.length > 0) {
+    return Promise.resolve(Result.err(
+      new Error(`Missing required environment variables: ${missing.join(', ')}`)
+    ));
+  }
   return calculatePreSignedUrl(p, {
     storageUrl: URI.from(this.env.STORAGE_URL),
     aws: {
```

Likely invalid or redundant comment.

14-19: 🛠️ Refactor suggestion

Improve initialization safety of server properties.

The use of non-null assertions (`!`) for required properties is risky. These properties should be properly initialized in the constructor or marked as potentially undefined.

```diff
-  _upgradeWebSocket!: UpgradeWebSocket;
-  _injectWebSocket!: (t: unknown) => void;
-  _serve!: serveFn;
-  _server!: ServerType;
+  private _upgradeWebSocket?: UpgradeWebSocket;
+  private _injectWebSocket?: (t: unknown) => void;
+  private _serve?: serveFn;
+  private _server?: ServerType;
```

Likely invalid or redundant comment.

60-65: 🛠️ Refactor suggestion

Enhance server shutdown with timeout and cleanup.

The server shutdown lacks a timeout mechanism and proper cleanup. Also, remove the commented-out code.

```diff
-async close(): Promise<void> {
-  this._server.close(() => {
-    /* */
-  });
-  // return new Promise((res) => this._server.close(() => res()));
-}
+async close(timeoutMs: number = 5000): Promise<void> {
+  if (!this._server) return;
+
+  const timeout = new Promise((_, reject) =>
+    setTimeout(() => reject(new Error('Server close timeout')), timeoutMs)
+  );
+
+  try {
+    await Promise.race([
+      new Promise<void>((resolve) => {
+        this._server.close(() => resolve());
+      }),
+      timeout
+    ]);
+  } finally {
+    this._server = undefined;
+  }
+}
```

Likely invalid or redundant comment.

54-59: ⚠️ Potential issue

Add error handling and timeout for server startup.

The server startup process lacks error handling and a timeout mechanism, which could leave the application in an inconsistent state if initialization fails.

```diff
 async serve(app: Hono, port: number): Promise<void> {
-  await new Promise<void>((resolve) => {
-    this._server = this._serve({ fetch: app.fetch, port }, () => resolve());
-  });
-  this._injectWebSocket(this._server);
+  const timeout = 30000; // 30 seconds
+  try {
+    await Promise.race([
+      new Promise<void>((resolve, reject) => {
+        try {
+          this._server = this._serve({ fetch: app.fetch, port }, () => resolve());
+        } catch (err) {
+          reject(new Error(`Failed to start server: ${err}`));
+        }
+      }),
+      new Promise((_, reject) =>
+        setTimeout(() => reject(new Error('Server startup timeout')), timeout)
+      )
+    ]);
+    if (!this._server) {
+      throw new Error('Server failed to initialize');
+    }
+    this._injectWebSocket(this._server);
+  } catch (err) {
+    await this.close().catch(() => {}); // Cleanup on failure
+    throw err;
+  }
```

Likely invalid or redundant comment.
src/cloud/msg-dispatch.ts (1)

70-79: 🛠️ Refactor suggestion

Enhance DDoS protection implementation.

The current DDoS protection is limited to connection ID validation. Consider implementing:

- Request rate limiting
- Connection pooling
- Resource usage monitoring
- Timeout mechanisms for inactive connections

```diff
+  private static readonly MAX_CONNECTIONS = 1000;
+  private static readonly RATE_LIMIT_WINDOW = 60000; // 1 minute
+  private static readonly MAX_REQUESTS_PER_WINDOW = 100;
+
+  private _connectionCount = 0;
+  private _requestCounts = new Map<string, { count: number; windowStart: number }>();
+
   addConn(aConn: Connection): Result<WSConnection> {
+    // Check connection limit
+    if (this._connectionCount >= MsgDispatcher.MAX_CONNECTIONS) {
+      return this.logger.Error().Msg('Maximum connections reached').ResultError();
+    }
+
+    // Rate limiting
+    const now = Date.now();
+    const reqCount = this._requestCounts.get(aConn.reqId) ?? { count: 0, windowStart: now };
+    if (now - reqCount.windowStart > MsgDispatcher.RATE_LIMIT_WINDOW) {
+      reqCount.count = 1;
+      reqCount.windowStart = now;
+    } else if (reqCount.count >= MsgDispatcher.MAX_REQUESTS_PER_WINDOW) {
+      return this.logger.Error().Msg('Rate limit exceeded').ResultError();
+    } else {
+      reqCount.count++;
+    }
+    this._requestCounts.set(aConn.reqId, reqCount);
+
     if (!this.wsConn) {
       this.wsConn = new WSConnection({ ...aConn, resId: this.sthis.nextId().str });
+      this._connectionCount++;
+      this.wsConn.onClose(() => this._connectionCount--);
       return Result.Ok(this.wsConn);
     }
```

Likely invalid or redundant comment.
src/cloud/test-helper.ts (1)

131-141: ⚠️ Potential issue

Add error handling for TOML file operations.

The function lacks error handling for file reading and parsing operations.

```diff
 export async function resolveToml() {
   const tomlFile = "src/cloud/backend/wrangler.toml";
-  const tomeStr = await fs.readFile(tomlFile, "utf-8");
-  const wranglerFile = toml.parse(tomeStr) as unknown as {
-    env: { "test-reqRes": { vars: Env } };
-  };
+  let tomeStr: string;
+  try {
+    tomeStr = await fs.readFile(tomlFile, "utf-8");
+  } catch (err) {
+    throw new Error(`Failed to read TOML file at ${tomlFile}: ${err.message}`);
+  }
+
+  let wranglerFile: { env: { "test-reqRes": { vars: Env } } };
+  try {
+    wranglerFile = toml.parse(tomeStr) as unknown as {
+      env: { "test-reqRes": { vars: Env } };
+    };
+  } catch (err) {
+    throw new Error(`Failed to parse TOML file: ${err.message}`);
+  }
+
+  if (!wranglerFile?.env?.["test-reqRes"]?.vars) {
+    throw new Error('Invalid TOML structure: missing required configuration');
+  }
+
   return {
     tomlFile,
     env: wranglerFile.env["test-reqRes"].vars,
   };
 }
```

Likely invalid or redundant comment.
src/cloud/connection.test.ts (2)

53-66: Well-structured test setup!

Good practices observed:

- Comprehensive testing of both HTTP and WebSocket protocols
- Proper test isolation with separate configurations
- Random port allocation for parallel test execution

193-224: Strengthen data operation test coverage.

The current tests only verify URL signing. Consider adding:

- Actual data operation verification
- Edge cases (empty data, large payloads)
- Concurrent operation handling
src/cloud/backend/server.ts (2)

219-231: Empty catch block needs attention in `webSocketClose`.

The empty catch block silently ignores all errors. Consider logging the specific error types that are expected to be safely ignored.

1-12: ⚠️ Potential issue

Remove Cloudflare dependencies for local Docker transition.

Since the PR's objective is to transition to a local Docker setup, the following should be addressed:

- Remove the Cloudflare Workers types reference
- Replace the `DurableObject` import with a local implementation
- Update the server implementation for the Docker environment

```diff
-// / <reference types="@cloudflare/workers-types" />
-import { DurableObject } from "cloudflare:workers";
+import { LocalDurableObject } from "../local/durable-object.js";
```

Likely invalid or redundant comment.
src/cloud/hono-server.ts
Outdated
```ts
await dp.dispatch(this.impl, rMsg.Ok(), (msg) => {
  const str = this.msgP.ende.encode(msg);
  ws.send(str);
});
```
🛠️ Refactor suggestion
Add error handling for message encoding
The message encoding process lacks error handling, which could lead to unhandled exceptions.
```diff
 await dp.dispatch(this.impl, rMsg.Ok(), (msg) => {
-  const str = this.msgP.ende.encode(msg);
-  ws.send(str);
+  try {
+    const str = this.msgP.ende.encode(msg);
+    ws.send(str);
+  } catch (error) {
+    this.logger.Error().Err(error).Msg("Failed to encode or send message");
+    ws.send(this.msgP.ende.encode(buildErrorMsg(this.sthis, this.logger, msg, error)));
+  }
 });
```
src/cloud/test-helper.ts
Outdated
```ts
export function CFHonoServerFactory() {
  return {
    name: "CFHonoServer",
    factory: async (_sthis: SuperThis, _msgP: MsgerParams, remoteGestalt: Gestalt, port: number) => {
      if (process.env.FP_WRANGLER_PORT) {
        const hs = {
          start: async () => {
            return hs;
          },
          close: async () => {
            /* */
          },
        } as unknown as HonoServer;
        return hs;
      }
      const { tomlFile } = await resolveToml();
      $.verbose = !!process.env.FP_DEBUG;
      const runningWrangler = $`
        wrangler dev -c ${tomlFile} --port ${port} --env test-${remoteGestalt.protocolCapabilities[0]} --no-show-interactive-dev-session &
        waitPid=$!
        echo "PID:$waitPid"
        wait $waitPid`;
      const waitReady = new Future();
      let pid: number | undefined;
      runningWrangler.stdout.on("data", (chunk) => {
        // console.log(">>", chunk.toString())
        const mightPid = chunk.toString().match(/PID:(\d+)/)?.[1];
        if (mightPid) {
          pid = +mightPid;
        }
        if (chunk.includes("Ready on http")) {
          waitReady.resolve(true);
        }
      });
      runningWrangler.stderr.on("data", (chunk) => {
        // eslint-disable-next-line no-console
        console.error("!!", chunk.toString());
      });
      await waitReady.asPromise();
      const hs = {
        start: async () => {
          return hs;
        },
        close: async () => {
          if (pid) process.kill(pid);
        },
      } as unknown as HonoServer;
      return hs;
    },
  };
}
```
🛠️ Refactor suggestion
Improve process management and error handling
The server factory needs improvements in:
- Process cleanup on errors
- Timeout handling for server startup
- Proper error propagation
export function CFHonoServerFactory() {
+ const STARTUP_TIMEOUT = 30000; // 30 seconds
+
return {
name: "CFHonoServer",
factory: async (_sthis: SuperThis, _msgP: MsgerParams, remoteGestalt: Gestalt, port: number) => {
+ let serverProcess: { pid?: number } = {};
+
if (process.env.FP_WRANGLER_PORT) {
const hs = {
start: async () => {
return hs;
},
close: async () => {
- /* */
+ if (serverProcess.pid) {
+ process.kill(serverProcess.pid);
+ }
},
} as unknown as HonoServer;
return hs;
}
+
const { tomlFile } = await resolveToml();
$.verbose = !!process.env.FP_DEBUG;
+
+ const startupTimeout = new Promise((_, reject) =>
+ setTimeout(() => reject(new Error('Server startup timeout')), STARTUP_TIMEOUT)
+ );
+
const runningWrangler = $`
wrangler dev -c ${tomlFile} --port ${port} --env test-${remoteGestalt.protocolCapabilities[0]} --no-show-interactive-dev-session &
waitPid=$!
echo "PID:$waitPid"
wait $waitPid`;
+
const waitReady = new Future();
- let pid: number | undefined;
runningWrangler.stdout.on("data", (chunk) => {
const mightPid = chunk.toString().match(/PID:(\d+)/)?.[1];
if (mightPid) {
- pid = +mightPid;
+ serverProcess.pid = +mightPid;
}
if (chunk.includes("Ready on http")) {
waitReady.resolve(true);
}
});
+
runningWrangler.stderr.on("data", (chunk) => {
- // eslint-disable-next-line no-console
- console.error("!!", chunk.toString());
+ const error = chunk.toString();
+ console.error("Server error:", error);
+ if (!waitReady.isResolved()) {
+ waitReady.reject(new Error(`Server startup failed: ${error}`));
+ }
});
- await waitReady.asPromise();
+
+ try {
+ await Promise.race([waitReady.asPromise(), startupTimeout]);
+ } catch (error) {
+ if (serverProcess.pid) {
+ process.kill(serverProcess.pid);
+ }
+ throw error;
+ }
+
const hs = {
start: async () => {
return hs;
},
close: async () => {
- if (pid) process.kill(pid);
+ if (serverProcess.pid) {
+ process.kill(serverProcess.pid);
+ serverProcess.pid = undefined;
+ }
},
} as unknown as HonoServer;
return hs;
},
};
}
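The startup-timeout idea in the diff above can be distilled into a standalone helper. The sketch below is illustrative only (the `withTimeout` name and the demo promise are assumptions, not code from this PR); it shows the Promise.race pattern the suggestion relies on, plus clearing the timer so the process can exit promptly:

```typescript
// Illustrative sketch of the startup-timeout pattern suggested above:
// race the "ready" signal against a timer, and clear the timer either way.
function withTimeout<T>(work: Promise<T>, ms: number, label = "operation"): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  // clearTimeout in finally: without it the pending timer keeps the event loop alive.
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
}

// Usage sketch: wrap a "server printed its ready line" promise.
async function demo(): Promise<string> {
  const ready = new Promise<string>((resolve) => setTimeout(() => resolve("ready"), 10));
  return withTimeout(ready, 1000, "server startup");
}

demo().then((s) => console.log(s)); // prints "ready"
```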
src/cloud/connection.test.ts
Outdated
describe("WAL", async () => {
  it("Get", async () => {
    const sp = sup();
    const res = await conn.request(buildReqGetWAL(sthis, sp, conn.conn), { waitFor: MsgIsResGetWAL });
    if (MsgIsResGetWAL(res)) {
      // expect(res.params).toEqual(sp);
      expect(URI.from(res.signedUrl).asObj()).toEqual(await refURL(res));
    } else {
      assert.fail("expected MsgResGetWAL", JSON.stringify(res));
    }
  });
  it("Put", async () => {
    const sp = sup();
    const res = await conn.request(buildReqPutWAL(sthis, sp, conn.conn), { waitFor: MsgIsResPutWAL });
    if (MsgIsResPutWAL(res)) {
      // expect(res.params).toEqual(sp);
      expect(URI.from(res.signedUrl).asObj()).toEqual(await refURL(res));
    } else {
      assert.fail("expected MsgResPutWAL", JSON.stringify(res));
    }
  });
  it("Del", async () => {
    const sp = sup();
    const res = await conn.request(buildReqDelWAL(sthis, sp, conn.conn), { waitFor: MsgIsResDelWAL });
    if (MsgIsResDelWAL(res)) {
      // expect(res.params).toEqual(sp);
      expect(URI.from(res.signedUrl).asObj()).toEqual(await refURL(res));
    } else {
      assert.fail("expected MsgResDelWAL", JSON.stringify(res));
    }
  });
});
🛠️ Refactor suggestion
Add WAL-specific test scenarios.
Current WAL tests only verify URL signing. Consider adding tests for:
- WAL entry ordering
- Conflict resolution
- Recovery scenarios
Would you like me to provide example test cases for these WAL-specific scenarios?
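For instance, the "WAL entry ordering" case could be driven by a small invariant checker along these lines. This is only a sketch: the `WalEntry` shape and its `seq` field are assumed names for illustration, not part of the actual Fireproof cloud API.

```typescript
// Illustrative sketch: `WalEntry` and its `seq` field are assumed names,
// not types from this codebase.
interface WalEntry {
  seq: number; // monotonically increasing sequence number
  payload: string; // opaque operation payload
}

// True when entries are strictly ordered by `seq` with no gaps; this is the
// invariant a WAL-ordering test would assert after replaying entries.
function isContiguouslyOrdered(entries: WalEntry[]): boolean {
  return entries.every((e, i) => i === 0 || e.seq === entries[i - 1].seq + 1);
}

const ordered: WalEntry[] = [
  { seq: 1, payload: "put" },
  { seq: 2, payload: "put" },
  { seq: 3, payload: "del" },
];
const gapped: WalEntry[] = [
  { seq: 1, payload: "put" },
  { seq: 3, payload: "del" },
];

console.log(isContiguouslyOrdered(ordered)); // true
console.log(isContiguouslyOrdered(gapped)); // false
```

A real test would build `entries` from the responses of the Put/Del requests above rather than from literals.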
src/cloud/backend/server.ts
Outdated
const app = new Hono();
const once = new ResolveOnce();

export default {
  fetch: async (req, env, ctx): Promise<Response> => {
    const logger = ensureLogger(env, "CF-Fireproof");
    await once.once(() => {
      const sthis = ensureSuperThis({ logger });
      const msgP = defaultMsgParams(sthis, {
        hasPersistent: true,
        protocolCapabilities: env.FP_PROTOCOL
          ? env.FP_PROTOCOL === "ws"
            ? ["stream"]
            : ["reqRes"]
          : ["reqRes", "stream"],
      });
      const gs = defaultGestalt(msgP, {
        id: env.FP_PROTOCOL ? (env.FP_PROTOCOL === "http" ? "HTTP-server" : "WS-server") : "FP-CF-Server",
      });
      const honoServer = new HonoServer(sthis, msgP, gs, new CFHonoServer(sthis, msgP.ende, env));
      return honoServer.start(app);
    });
    return app.fetch(req, env, ctx);
  },
} satisfies ExportedHandler<Env>;
/*
🛠️ Refactor suggestion
Add environment validation for protocol configuration.
The current implementation assumes valid environment values. Consider:
- Validating FP_PROTOCOL against allowed values
- Providing meaningful error messages for invalid configurations
- Adding default fallback values
+const VALID_PROTOCOLS = ["http", "ws"] as const;
+type Protocol = typeof VALID_PROTOCOLS[number];
+
+function validateProtocol(protocol: string | undefined): Protocol {
+ if (!protocol || !VALID_PROTOCOLS.includes(protocol as Protocol)) {
+ logger.Warn()
+ .Str("protocol", protocol || "undefined")
+ .Msg("Invalid protocol, falling back to http");
+ return "http";
+ }
+ return protocol as Protocol;
+}
+
const app = new Hono();
const once = new ResolveOnce();
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 10
♻️ Duplicate comments (4)
src/cloud/backend/server.ts (1)
1-4: ⚠️ Potential issue
Remove Cloudflare dependencies for Docker transition.
The file still contains Cloudflare-specific imports that need to be replaced for the Docker transition:

-// / <reference types="@cloudflare/workers-types" />
-import { DurableObject } from "cloudflare:workers";
+import { NodeHonoServer } from "../node-hono-server.js";

src/cloud/client/gateway.ts (1)
212-219: ⚠️ Potential issue
Fix incorrect message filter in delConn.
The delete operation uses MsgIsResPutData for verification, which is incorrect:

const rResSignedUrl = await this.getResSignedUrl<ResDelData>(
  "reqDelData",
  "DELETE",
  "data",
-  MsgIsResPutData,
+  MsgIsResDelData,
  uri,
  conn
);

src/cloud/msger.ts (2)
31-33: ⚠️ Potential issue
Add input validation to selectRandom function.
The function could throw if the input array is empty, which is critical as it's used for WebSocket endpoint selection.

export function selectRandom<T>(arr: T[]): T {
+  if (!arr || arr.length === 0) {
+    throw new Error('Cannot select from empty array');
+  }
  return arr[Math.floor(Math.random() * arr.length)];
}
92-103: ⚠️ Potential issue
Ensure proper cleanup on failed connection starts.
When transitioning to Docker, connection failures during container startup are more likely. The current implementation doesn't clean up resources on failure.

export async function applyStart(prC: Promise<Result<MsgRawConnection>>): Promise<Result<MsgRawConnection>> {
  const rC = await prC;
  if (rC.isErr()) {
+    await rC.Ok()?.close().catch(() => {}); // Best effort cleanup
    return rC;
  }
  const c = rC.Ok();
  const r = await c.start();
  if (r.isErr()) {
+    await c.close().catch(() => {}); // Best effort cleanup
    return Result.Err(r.Err());
  }
  return rC;
}
🧹 Nitpick comments (13)
src/cloud/msg-types-data.ts (3)
22-24
: Consider refactoring to reduce code duplication in request builders.The functions
buildReqGetData
,buildReqPutData
, andbuildReqDelData
share similar logic in constructing request objects. To adhere to the DRY (Don't Repeat Yourself) principle, consider refactoring these functions into a single generic function that can handle different request types.Here is an example of how you might implement this:
function buildReqData<T extends ReqSignedUrl>( sthis: NextId, type: T['type'], sup: ReqSignedUrlParam, ctx: GwCtx ): T { return buildReqSignedUrl<T>(sthis, type, sup, ctx); }You can then refactor the specific request builders:
-export function buildReqGetData(sthis: NextId, sup: ReqSignedUrlParam, ctx: GwCtx): ReqGetData { - return buildReqSignedUrl<ReqGetData>(sthis, "reqGetData", sup, ctx); -} +export const buildReqGetData = (sthis: NextId, sup: ReqSignedUrlParam, ctx: GwCtx): ReqGetData => + buildReqData<ReqGetData>(sthis, "reqGetData", sup, ctx);Repeat similarly for
buildReqPutData
andbuildReqDelData
.Also applies to: 61-63, 90-92
26-28
: Consolidate type guard functions for request messages.The type guard functions
MsgIsReqGetData
,MsgIsReqPutData
, andMsgIsReqDelData
have identical implementations except for the message type string. You can create a generic type guard function to reduce code duplication.Example implementation:
function msgIsReqData<T extends MsgBase>(msg: MsgBase, type: string): msg is T { return msg.type === type; }Refactor the specific type guards:
-export function MsgIsReqGetData(msg: MsgBase): msg is ReqGetData { - return msg.type === "reqGetData"; -} +export const MsgIsReqGetData = (msg: MsgBase): msg is ReqGetData => + msgIsReqData<ReqGetData>(msg, "reqGetData");Repeat similarly for
MsgIsReqPutData
andMsgIsReqDelData
.Also applies to: 57-59, 86-88
43-50
: Refactor response builders to eliminate repetition.The functions
buildResGetData
,buildResPutData
, andbuildResDelData
are structurally similar. Consider creating a generic response builder function to avoid code duplication and improve maintainability.Example of a generic response builder:
function buildResData<ReqT extends MsgBase, ResT extends ResSignedUrl>( method: string, store: string, type: ResT['type'], sthis: SuperThis, logger: Logger, req: MsgWithConn<ReqT>, ctx: CalculatePreSignedUrl ): Promise<MsgWithError<ResT>> { return buildRes<MsgWithConn<ReqT>, ResT>(method, store, type, sthis, logger, req, ctx); }Refactor the specific response builders:
-export function buildResGetData( - sthis: SuperThis, - logger: Logger, - req: MsgWithConn<ReqGetData>, - ctx: CalculatePreSignedUrl -): Promise<MsgWithError<ResGetData>> { - return buildRes<MsgWithConn<ReqGetData>, ResGetData>("GET", "data", "resGetData", sthis, logger, req, ctx); -} +export const buildResGetData = ( + sthis: SuperThis, + logger: Logger, + req: MsgWithConn<ReqGetData>, + ctx: CalculatePreSignedUrl +): Promise<MsgWithError<ResGetData>> => + buildResData<ReqGetData, ResGetData>("GET", "data", "resGetData", sthis, logger, req, ctx);Repeat similarly for
buildResPutData
andbuildResDelData
.Also applies to: 73-80, 102-109
src/cloud/msg-type-meta.ts (2)
17-64
: Remove commented-out code to improve readability.The code between lines 17 and 64 consists of large blocks of commented-out code. Keeping outdated or unused code can clutter the codebase and make maintenance harder. If this code is no longer needed, consider removing it entirely. If it's needed for future reference, it might be better placed in version control history or documentation.
225-229
: Remove unused type aliases and functions.The type alias
ReqDelMetaWithConnId
and the functionMsgIsReqDelMetaWithConnId
are commented out. If these are not intended for future use, it's better to remove them to keep the codebase clean.Also applies to: 227-229
src/cloud/msg-raw-connection-base.ts (1)
7-13
: Document class properties and constructor parameters.To enhance code readability and maintainability, add comments or TypeScript docstrings to explain the purpose of the
sthis
andexchangedGestalt
properties, as well as their roles in the constructor.Example:
/** * SuperThis instance for generating unique identifiers and logging. */ readonly sthis: SuperThis; /** * Contains the negotiated capabilities and configurations between client and server. */ readonly exchangedGestalt: ExchangedGestalt;src/cloud/http-connection.ts (1)
81-81
: Parameterize the HTTP method.The request method is hardcoded to "PUT". Consider making it configurable to support different HTTP methods based on the operation type.
Apply this diff to parameterize the HTTP method:
- method: "PUT",
+ method: _opts.httpMethod || "PUT", // Default to PUT if not specified

src/cloud/backend/cf-hono-server.ts (1)
42-49
: Clean up commented code and improve error handlingThe inject method contains commented code and could benefit from better error handling when accessing environment variables.
inject(c: Context, fn: (rt: RunTimeParams) => Promise<Response | void>): Promise<Response | void> { - // this._env = c.env const sthis = ensureSuperThis({ logger: new LoggerImpl(), }); - sthis.env.sets(c.env); + try { + sthis.env.sets(c.env); + } catch (error) { + logger.Error().Err(error).Msg("Failed to set environment variables"); + return Promise.reject(error); + } const logger = ensureLogger(sthis, `CFHono[${URI.from(c.req.url).pathname}]`);src/cloud/hono-server.ts (1)
98-109
: Enhance error handling and security headersThe request processing could benefit from more specific error handling and additional security headers.
app.put("/fp", (c) => this.factory.inject(c, async ({ sthis, logger, impl }) => { impl.headers.Items().forEach(([k, v]) => c.res.headers.set(k, v[0])); + // Add security headers + c.res.headers.set('X-Content-Type-Options', 'nosniff'); + c.res.headers.set('X-Frame-Options', 'DENY'); const rMsg = await exception2Result(() => c.req.json() as Promise<MsgBase>); if (rMsg.isErr()) { c.status(400); + logger.Error().Err(rMsg.Err()).Msg("Invalid JSON payload"); return c.json(buildErrorMsg(sthis, logger, { tid: "internal" }, rMsg.Err())); }src/cloud/connection.test.ts (2)
242-273
: Enhance WAL testing coverageThe WAL tests only verify URL signing. Consider adding tests for WAL-specific functionality.
Additional test cases should include:
- WAL entry ordering
- Conflict resolution
- Recovery scenarios
Would you like me to provide example test cases for these WAL-specific scenarios?
75-78
: Improve server cleanup in testsThe afterAll cleanup could be more robust to ensure proper resource cleanup.
afterAll(async () => {
-  // console.log("closing server");
-  await server.close();
+  try {
+    await server.close();
+  } catch (error) {
+    console.error('Failed to close server:', error);
+  } finally {
+    server = undefined;
+  }
});
351-362
: Improve error handling in buildResOpen.Use optional chaining for safer property access:
- if (!(req.conn && req.conn.reqId)) {
+ if (!req.conn?.reqId) {
    throw new Error("req.conn.reqId is required");
  }
🪛 Biome (1.9.4)
[error] 351-351: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
456-478
: Enhance error message construction.The buildErrorMsg function could provide more detailed error information:
const msg = {
  src: base,
  type: "error",
  tid: base.tid || "internal",
-  message: error.message,
+  message: `${error.name}: ${error.message}`,
  version: VERSION,
  body,
  stack,
+  timestamp: new Date().toISOString(),
} satisfies ErrorMsg;
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml
is excluded by!**/pnpm-lock.yaml
📒 Files selected for processing (22)
package.json
(5 hunks)setup.cloud.ts
(1 hunks)src/cloud/backend/cf-hono-server.ts
(1 hunks)src/cloud/backend/server.ts
(1 hunks)src/cloud/client/cloud-gateway.test.ts
(1 hunks)src/cloud/client/gateway.ts
(1 hunks)src/cloud/connection.test.ts
(1 hunks)src/cloud/hono-server.ts
(1 hunks)src/cloud/http-connection.ts
(1 hunks)src/cloud/meta-merger/meta-merger.test.ts
(1 hunks)src/cloud/meta-merger/meta-merger.ts
(1 hunks)src/cloud/msg-dispatch.ts
(1 hunks)src/cloud/msg-raw-connection-base.ts
(1 hunks)src/cloud/msg-type-meta.ts
(1 hunks)src/cloud/msg-types-data.ts
(1 hunks)src/cloud/msg-types-wal.ts
(1 hunks)src/cloud/msg-types.ts
(1 hunks)src/cloud/msger.ts
(1 hunks)src/cloud/node-hono-server.ts
(1 hunks)src/cloud/pre-signed-url.ts
(1 hunks)src/cloud/test-helper.ts
(1 hunks)src/cloud/ws-connection.ts
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- src/cloud/client/cloud-gateway.test.ts
- package.json
🧰 Additional context used
🪛 Biome (1.9.4)
src/cloud/meta-merger/meta-merger.test.ts
[error] 18-18: Avoid the use of spread (...
) syntax on accumulators.
Spread syntax should be avoided on accumulators (like those in .reduce
) because it causes a time complexity of O(n^2)
.
Consider methods such as .splice or .push instead.
(lint/performance/noAccumulatingSpread)
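The Biome warning above is about a concrete complexity trap; a minimal sketch of the flagged pattern and its fix (the example data is illustrative, not from the test file):

```typescript
// Why Biome flags spread on accumulators: each `...acc` copies the whole
// accumulator, turning a linear reduce into O(n^2).
const items = Array.from({ length: 5 }, (_, i) => i);

// O(n^2): a new array is allocated and copied on every step.
const slow = items.reduce<number[]>((acc, x) => [...acc, x * 2], []);

// O(n): mutate and return the same accumulator, as the lint suggests.
const fast = items.reduce<number[]>((acc, x) => {
  acc.push(x * 2);
  return acc;
}, []);

console.log(slow.join(",")); // 0,2,4,6,8
console.log(fast.join(",")); // 0,2,4,6,8
```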
src/cloud/node-hono-server.ts
[error] 35-35: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 35-35: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/hono-server.ts
[error] 17-17: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 64-64: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 64-64: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/backend/cf-hono-server.ts
[error] 41-41: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 41-41: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/msg-types.ts
[error] 351-351: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🔇 Additional comments (24)
src/cloud/msg-type-meta.ts (3)
66-70
: Ensure consistency in interface extensions.The
ReqPutMeta
interface extends bothMsgWithTenantLedger
andMsgWithOptionalConn
, while also including properties likeparams
andmetas
. Verify that this interface correctly represents the required structure and that all necessary properties are included.
145-158
: Clarify the purpose ofUpdateMetaEvent
and its construction.The
UpdateMetaEvent
interface and thebuildUpdateMetaEvent
function are crucial for meta update notifications. Ensure that theconsub
parameter correctly includes bothconnId
andsubscriberId
, and that these values are properly managed. Double-check thatrq.req.metas
in line 155 is the desired source of metadata for the event.If needed, provide additional documentation or comments to explain the usage of these constructs.
190-193
: Validate the necessity ofsignedGetUrl
inResGetMeta
.In the
ResGetMeta
interface, consider whether thesignedGetUrl
property should be included withinGetMetaParam
or directly inResGetMeta
. Ensure that clients receive all the necessary information to retrieve the metadata, and that the structure aligns with the overall API design.src/cloud/msg-types-wal.ts (1)
31-34
: Consider documenting payload handling strategy.
The commented-out payload properties in multiple interfaces suggest incomplete implementation or future plans for payload handling. Please clarify the strategy for payload serialization/deserialization, especially for binary data transfer over JSON.
Also applies to: 57-60, 70-72, 107-109
✅ Verification successful
Documentation of payload handling strategy is sufficient
The commented properties consistently document the intended payload handling strategy (base64 encoding for binary data in JSON) across all message type interfaces. This serves as clear guidance for future implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for payload handling patterns in the codebase
rg -A 3 'payload.*Uint8Array.*JSON.*base64'

Length of output: 1030
Script:
#!/bin/bash
# Search for base64 encoding/decoding implementation
rg -A 3 'base64' --type ts

Length of output: 1340
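As a reference for the strategy those comments describe, a Uint8Array payload can round-trip through JSON via base64 roughly like this. This is a minimal Node-flavored sketch (Buffer is assumed), not the project's actual codec:

```typescript
// Minimal sketch of base64-encoding binary payloads for JSON transport.
// Node's Buffer is assumed; browsers would need a different encoder.
function encodePayload(data: Uint8Array): string {
  return Buffer.from(data).toString("base64");
}

function decodePayload(b64: string): Uint8Array {
  return new Uint8Array(Buffer.from(b64, "base64"));
}

const payload = new Uint8Array([0, 1, 2, 255]);
const wire = JSON.stringify({ payload: encodePayload(payload) });
const back = decodePayload(JSON.parse(wire).payload);
console.log(Array.from(back)); // [ 0, 1, 2, 255 ]
```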
src/cloud/pre-signed-url.ts (2)
16-26
: Consider safer credential handling.Sensitive AWS credentials are defined in Env. Confirm that your local Docker environment or deployment pipeline does not risk printing or exposing these secrets (e.g., in logs).
28-81
: Verify URL generation in local Docker environment.The URL generation logic needs verification in the local Docker setup to ensure it works with the new environment.
✅ Verification successful
URL generation works correctly in local Docker environment
The URL generation logic has been verified through extensive test coverage across multiple environments, including local Docker setup. The implementation properly handles environment variables, AWS signing, and different storage types.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check for URL generation patterns and potential issues rg -A 5 'storageUrl.*build\(\).*setParam' # Check for any hardcoded URLs that might conflict rg -A 2 'https?://[^"'\'']*storage'Length of output: 3132
Script:
#!/bin/bash # Check URL builder implementation ast-grep --pattern 'class $_ implements URLBuilder { $$$ }' # Check environment configuration handling rg -A 5 'PreSignedEnv|storageUrl' src/ # Look for test configurations fd -e test.ts -e spec.ts -X cat {}Length of output: 47044
src/cloud/meta-merger/meta-merger.ts (2)
67-71
: Add transaction support for schema creation.When creating schema in local Docker environment, ensure atomic operations using transactions.
89-101
:⚠️ Potential issueEnhance error handling in addMeta.
The current implementation logs errors but continues processing, which could lead to inconsistent state. Consider implementing proper error recovery or transaction rollback.
Apply this diff to improve error handling:
for (const meta of mm.metas) { + let retries = 3; try { await this.sql.metaByTenantLedger.ensure({ ...byConnection, metaCID: meta.cid, meta: meta, updateAt: now, }); } catch (e) { this.sthis.logger.Warn().Err(e).Str("metaCID", meta.cid).Msg("addMeta"); + if (--retries <= 0) { + throw new Error(`Failed to add meta after retries: ${e.message}`); + } + await new Promise(resolve => setTimeout(resolve, 1000)); } }Likely invalid or redundant comment.
src/cloud/node-hono-server.ts (3)
88-103
: LGTM!The implementation of
NodeHonoServer
is clean and correctly handles WebSocket upgrades and CORS headers.
21-24
:⚠️ Potential issueImprove initialization safety of server properties.
The use of non-null assertion operator (!) for required properties is risky. These properties should be properly initialized in the constructor or marked as potentially undefined.
Apply this diff to improve initialization safety:
- _upgradeWebSocket!: UpgradeWebSocket; - _injectWebSocket!: (t: unknown) => void; - _serve!: serveFn; - _server!: ServerType; + private _upgradeWebSocket?: UpgradeWebSocket; + private _injectWebSocket?: (t: unknown) => void; + private _serve?: serveFn; + private _server?: ServerType;Likely invalid or redundant comment.
72-79
:⚠️ Potential issueAdd error handling for server startup.
The server startup process lacks error handling and could leave the application in an inconsistent state if initialization fails.
Apply this diff to add proper error handling:
- async serve(app: Hono, port: number): Promise<void> { - await new Promise<void>((resolve) => { - this._server = this._serve({ fetch: app.fetch, port }, () => { - this._injectWebSocket(this._server); - resolve(); - }); - }); + async serve(app: Hono, port: number): Promise<void> { + try { + await new Promise<void>((resolve, reject) => { + try { + this._server = this._serve({ fetch: app.fetch, port }, () => resolve()); + } catch (err) { + reject(new Error(`Failed to start server: ${err}`)); + } + }); + if (!this._server) { + throw new Error('Server failed to initialize'); + } + this._injectWebSocket(this._server); + } catch (err) { + await this.close().catch(() => {}); // Cleanup on failure + throw err; + }Likely invalid or redundant comment.
src/cloud/meta-merger/meta-merger.test.ts (2)
8-10
: LGTM!The sorting implementation is correct and efficient.
29-151
: LGTM!The test suite is comprehensive and well-structured, covering essential test cases for the MetaMerger functionality.
src/cloud/msg-dispatch.ts (3)
44-54
: LGTM!The WebSocket pair management is correctly implemented with proper validation.
96-171
: LGTM!The message dispatching logic is well-implemented with proper connection validation and comprehensive message type handling.
67-93
: Enhance DDoS protection implementation.The current DDoS protection is limited to basic connection tracking. Consider enhancing it with:
- Request rate limiting
- Connection pooling
- Resource usage monitoring
- IP-based restrictions
Run this script to check for existing rate limiting implementations:
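The "request rate limiting" item in that list is commonly implemented as a per-connection token bucket; a minimal sketch (all names here are illustrative, not part of the codebase):

```typescript
// Illustrative per-connection token bucket for request rate limiting.
// `capacity` tokens allow bursts; they refill continuously at `refillPerSec`.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private refillPerSec: number, now = Date.now()) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if the request is allowed; `now` is injectable for testing.
  tryAcquire(now = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Usage sketch: one bucket per connection id, e.g. burst of 10, 5 req/s sustained.
const buckets = new Map<string, TokenBucket>();
function allow(connId: string): boolean {
  let b = buckets.get(connId);
  if (!b) {
    b = new TokenBucket(10, 5);
    buckets.set(connId, b);
  }
  return b.tryAcquire();
}
```

Connection pooling and IP-based restrictions would sit in front of this check rather than inside it.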
src/cloud/ws-connection.ts (2)
143-151
: 🛠️ Refactor suggestionAdd timeout handling to request method.
The request method should handle timeouts to prevent hanging requests.
Apply this diff to add timeout handling:
async request<Q extends MsgBase, S extends MsgBase>(req: Q, opts: RequestOpts): Promise<MsgWithError<S>> { if (!this.opened) { return buildErrorMsg(this.sthis, this.logger, req, this.logger.Error().Msg("Connection not open").AsError()); } const future = new Future<S>(); + const timer = setTimeout(() => { + this.waitForTid.delete(req.tid); + future.resolve(buildErrorMsg(this.sthis, this.logger, req, this.logger.Error().Msg("Request timeout").AsError())); + }, opts.timeout || this.msgP.timeout); this.waitForTid.set(req.tid, { tid: req.tid, future, waitFor: opts.waitFor, timeout: opts.timeout }); await this.sendMsg(req); - return future.asPromise(); + return future.asPromise().finally(() => clearTimeout(timer)); }Likely invalid or redundant comment.
36-87
:⚠️ Potential issueFix potential race conditions in timeout handling.
The timeout timer should be cleared in all event handlers to prevent the timeout callback from being called after the connection is already established or failed.
Apply this diff to fix the race conditions:
async start(): Promise<Result<void>> { const onOpenFuture: Future<Result<unknown>> = new Future<Result<unknown>>(); const timer = setTimeout(() => { const err = this.logger.Error().Dur("timeout", this.msgP.timeout).Msg("Timeout").AsError(); this.toMsg(buildErrorMsg(this.sthis, this.logger, {} as MsgBase, err)); onOpenFuture.resolve(Result.Err(err)); }, this.msgP.timeout); this.ws.onopen = () => { + clearTimeout(timer); onOpenFuture.resolve(Result.Ok(undefined)); this.opened = true; }; this.ws.onerror = (ierr) => { + clearTimeout(timer); const err = this.logger.Error().Err(ierr).Msg("WS Error").AsError(); onOpenFuture.resolve(Result.Err(err)); const res = this.buildErrorMsg(this.logger.Error(), {}, err); this.toMsg(res); }; this.ws.onclose = () => { + clearTimeout(timer); this.opened = false; this.close().catch((ierr) => { const err = this.logger.Error().Err(ierr).Msg("close error").AsError(); onOpenFuture.resolve(Result.Err(err)); this.toMsg(buildErrorMsg(this.sthis, this.logger, { tid: "internal" } as MsgBase, err)); }); };Likely invalid or redundant comment.
src/cloud/hono-server.ts (2)
71-77
: Review CORS configuration for production useThe current CORS configuration uses permissive settings (
*
) which may be suitable for development but could pose security risks in production.Let's check for environment-specific configurations:
#!/bin/bash # Search for environment-specific CORS configurations rg "Access-Control-Allow-Origin" --type ts
144-147
: 🛠️ Refactor suggestionImprove WebSocket connection cleanup
The onClose handler's cleanup could be improved to prevent memory leaks.
onClose: () => { - dp = undefined as unknown as MsgDispatcher; - // console.log('Connection closed') + if (dp) { + dp.dispose(); // Add dispose method to MsgDispatcher + dp = undefined as unknown as MsgDispatcher; + } + logger.Info().Msg("WebSocket connection closed and cleaned up"); },Likely invalid or redundant comment.
src/cloud/test-helper.ts (1)
173-211
: 🛠️ Refactor suggestionImprove server factory error handling and resource management
The CFHonoServerFactory needs better error handling and resource cleanup.
export function CFHonoServerFactory() { + const STARTUP_TIMEOUT = 30000; // 30 seconds return { name: "CFHonoServer", factory: async (_sthis: SuperThis, _msgP: MsgerParams, remoteGestalt: Gestalt, port: number) => { + let serverProcess: { pid?: number } = {}; if (process.env.FP_WRANGLER_PORT) { return new HonoServer(new CFHonoFactory()); } const { tomlFile } = await resolveToml(); $.verbose = !!process.env.FP_DEBUG; + + const startupTimeout = new Promise((_, reject) => + setTimeout(() => reject(new Error('Server startup timeout')), STARTUP_TIMEOUT) + );Likely invalid or redundant comment.
src/cloud/backend/server.ts (1)
382-391
: Verify Hono server configuration for Docker environment.The server setup looks correct but verify:
- The Hono server registration works in Docker
- The app.fetch handler properly processes requests
Run this script to check the Hono server implementation:
✅ Verification successful
Implementation correctly supports both Docker and Cloudflare environments
The Hono server implementation is properly structured with environment-specific factories:
- NodeHonoFactory provides complete Docker support with proper WebSocket handling
- CFHonoFactory handles Cloudflare-specific requirements
- Registration pattern is consistent across both environments
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Check Hono server implementation
echo "=== Checking Hono server files ==="
rg "class.*HonoServer" src/
echo "=== Checking server registration ==="
rg "register.*app" src/
```

Length of output: 1067
Script:
```shell
#!/bin/bash
# Check base implementation and factory usage
echo "=== Checking HonoServerBase implementation ==="
ast-grep --pattern 'abstract class HonoServerBase { $$$ }'
echo "=== Checking factory implementations ==="
ast-grep --pattern 'class $_ implements HonoServerFactory { $$$ }'
```

Length of output: 10012
src/cloud/msger.ts (2)
163-181: 🛠️ Refactor suggestion

Add Docker-specific connection handling.

For reliable Docker container connections, consider these enhancements:

- Implement connection retries with exponential backoff
- Add container readiness checks

```diff
 static async openWS(
   sthis: SuperThis,
   url: URI,
   msgP: MsgerParamsWithEnDe,
   exGestalt: ExchangedGestalt
 ): Promise<Result<MsgRawConnection>> {
+  const maxRetries = process.env.NODE_ENV === 'test' ? 5 : 0;
+  const baseDelay = 1000;
+
+  const connectWithRetry = async (attempt = 0): Promise<WebSocket> => {
+    try {
+      let ws: WebSocket;
+      if (runtimeFn().isNodeIsh) {
+        const { WebSocket } = await import("ws");
+        ws = new WebSocket(url.toString()) as unknown as WebSocket;
+      } else {
+        ws = new WebSocket(url.toString());
+      }
+      return ws;
+    } catch (error) {
+      if (attempt < maxRetries) {
+        const delay = baseDelay * Math.pow(2, attempt);
+        await new Promise(resolve => setTimeout(resolve, delay));
+        return connectWithRetry(attempt + 1);
+      }
+      throw error;
+    }
+  };
   try {
-    let ws: WebSocket;
-    url = url.build().URI();
-    if (runtimeFn().isNodeIsh) {
-      const { WebSocket } = await import("ws");
-      ws = new WebSocket(url.toString()) as unknown as WebSocket;
-    } else {
-      ws = new WebSocket(url.toString());
-    }
+    url = url.build().URI();
+    const ws = await connectWithRetry();
     return Result.Ok(new WSConnection(sthis, ws, msgP, exGestalt));
+  } catch (error) {
+    return Result.Err(error as Error);
   }
 }
```

Likely invalid or redundant comment.
35-45: 🛠️ Refactor suggestion

Consider increasing the default timeout for Docker environments.

The timeout implementation is solid, but when connecting to local Docker containers that are starting up, the default 3000ms timeout (set in line 82) might be too aggressive.

```diff
 export function defaultMsgParams(sthis: SuperThis, igs: Partial<MsgerParamsWithEnDe>): MsgerParamsWithEnDe {
   return {
     mime: "application/json",
     ende: jsonEnDe(sthis),
-    timeout: 3000,
+    timeout: process.env.NODE_ENV === 'test' ? 10000 : 3000,
     protocolCapabilities: ["reqRes", "stream"],
     ...igs,
   } satisfies MsgerParamsWithEnDe;
 }
```

Likely invalid or redundant comment.
```ts
buildErrorMsg(logger: Logger, msg: Partial<MsgBase>, err: Error): ErrorMsg {
  // const logLine = this.sthis.logger.Error().Err(err).Any("msg", msg);
  const rmsg = Array.from(this.onErrorFns.values()).reduce((msg, fn) => {
    return fn(msg, err);
  }, msg);
  const emsg = buildErrorMsg(this.sthis, logger, rmsg, err);
  logger.Error().Err(err).Any("msg", rmsg).Msg("connection error");
  return emsg;
}
```
Handle potential errors in `buildErrorMsg` method correctly.

In the `buildErrorMsg` method, ensure that all transformations and error handling mechanisms are robust. Consider the following:

- Verify that reducing over `onErrorFns.values()` does not introduce unexpected behavior if the map is empty.
- Ensure that partial messages passed in are handled gracefully.
- Confirm that logging the error message does not expose sensitive information.

Consider adding error checks or default values to handle these scenarios.
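The empty-map case can be checked in isolation: `Array.prototype.reduce` with a seed simply returns the seed when the iterable is empty. A minimal sketch of that behavior, using hypothetical simplified `PartialMsg`/`ErrFn` shapes rather than the project's real types:

```typescript
// Hypothetical, simplified model of the reduce step in buildErrorMsg.
type PartialMsg = { tid?: string; type?: string };
type ErrFn = (msg: PartialMsg, err: Error) => PartialMsg;

function applyOnErrorFns(fns: Map<string, ErrFn>, msg: PartialMsg, err: Error): PartialMsg {
  // With an empty map, reduce returns the seed `msg` unchanged,
  // so the empty-map case is safe by construction.
  return Array.from(fns.values()).reduce((m, fn) => fn(m, err), msg);
}

const fns = new Map<string, ErrFn>();
const seed: PartialMsg = { tid: "t1" };
// No registered handlers: the seed comes back untouched.
const unchanged = applyOnErrorFns(fns, seed, new Error("boom"));

// One registered handler: it is applied while seed fields are preserved.
fns.set("tag", (m) => ({ ...m, type: "error" }));
const tagged = applyOnErrorFns(fns, seed, new Error("boom"));
```

So the remaining concerns are really the graceful handling of partial messages and log redaction, not the reduce itself.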
```ts
export function MsgIsResDelWAL(msg: MsgBase): msg is ResDelWAL {
  return msg.type === "resDelWAL" && MsgIsTenantLedger(msg);
}
```
Inconsistent type guard implementation.

`MsgIsResDelWAL` includes an additional `MsgIsTenantLedger` check that's not present in other similar type guards. This inconsistency could lead to runtime issues.

Apply this diff to maintain consistency:

```diff
- return msg.type === "resDelWAL" && MsgIsTenantLedger(msg);
+ return msg.type === "resDelWAL";
```
```ts
process.env.FP_STORAGE_URL = BuildURI.from("fireproof://localhost:1968")
  // .setParam("testMode", "true")
  // .setParam("getBaseUrl", "https://storage.fireproof.direct/")
  .setParam("protocol", "ws")
  .toString();
```
🛠️ Refactor suggestion
Make storage URL configurable for different environments.
The hardcoded localhost URL might cause issues when deploying to different environments. Consider making it configurable through environment variables.
Apply this diff to make the URL configurable:
```diff
-process.env.FP_STORAGE_URL = BuildURI.from("fireproof://localhost:1968")
+const storageHost = process.env.STORAGE_HOST || "localhost:1968";
+process.env.FP_STORAGE_URL = BuildURI.from(`fireproof://${storageHost}`)
```
```ts
function toCRDTEntries(rows: MetaConnection[]) {
  return rows.reduce((r, i) => [...r, ...i.metas], [] as CRDTEntry[]);
}
```
🛠️ Refactor suggestion

Optimize array concatenation performance.

Using spread operator in reduce has O(n²) time complexity. Use Array.prototype.push for better performance.

Apply this diff to optimize the implementation:

```diff
-function toCRDTEntries(rows: MetaConnection[]) {
-  return rows.reduce((r, i) => [...r, ...i.metas], [] as CRDTEntry[]);
-}
+function toCRDTEntries(rows: MetaConnection[]) {
+  const result: CRDTEntry[] = [];
+  for (const row of rows) {
+    result.push(...row.metas);
+  }
+  return result;
+}
```
🧰 Tools

🪛 Biome (1.9.4)

[error] 18-18: Avoid the use of spread (`...`) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in `.reduce`) because it causes a time complexity of O(n^2). Consider methods such as `.splice` or `.push` instead.

(lint/performance/noAccumulatingSpread)
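The two shapes the lint rule contrasts can be checked for equivalence directly; only their cost differs (the spread-in-reduce recopies the accumulator on every step, the push-based flatten appends each entry once). A small sketch with a stand-in row shape (plain strings instead of `CRDTEntry`):

```typescript
// Stand-in for MetaConnection rows; `metas` are plain strings for brevity.
type Row = { metas: string[] };

function flattenWithSpread(rows: Row[]): string[] {
  // O(n^2): the accumulator array is recopied on every iteration.
  return rows.reduce((r, i) => [...r, ...i.metas], [] as string[]);
}

function flattenWithPush(rows: Row[]): string[] {
  // O(n): each entry is appended exactly once.
  const result: string[] = [];
  for (const row of rows) {
    result.push(...row.metas);
  }
  return result;
}

const rows: Row[] = [{ metas: ["a", "b"] }, { metas: [] }, { metas: ["c"] }];
```

Both produce identical output, so the refactor is purely a performance change.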
```ts
// server.onopen = (ev) => {
//   console.log("onopen", ev);
//   wsEvents.onOpen?.(ev, wsCtx);
// }
```
🛠️ Refactor suggestion
Implement proper WebSocket event handling
The onopen event handler is commented out and error handling could be improved.
```diff
-// server.onopen = (ev) => {
-//   console.log("onopen", ev);
-//   wsEvents.onOpen?.(ev, wsCtx);
-// }
+server.onopen = (ev) => {
+  logger.Info().Msg("WebSocket connection opened");
+  wsEvents.onOpen?.(ev, wsCtx);
+};
 server.onerror = (err) => {
-  // console.log("onerror", err);
+  logger.Error().Err(err).Msg("WebSocket error occurred");
   wsEvents.onError?.(err, wsCtx);
 };
```
Also applies to: 127-130
src/cloud/backend/cf-hono-server.ts (Outdated)
```ts
server.onclose = (ev) => {
  // console.log("onclose", ev);
  wsEvents.onClose?.(ev, wsCtx);
};
```
🛠️ Refactor suggestion
Add cleanup and proper error handling in WebSocket lifecycle
The close and message handlers need proper cleanup and error handling.
```diff
 server.onclose = (ev) => {
-  // console.log("onclose", ev);
+  logger.Info().Msg("WebSocket connection closed");
+  // Clean up any resources
+  wsEvents.onClose?.(ev, wsCtx);
+  wsCtx = null;
+};
 server.onmessage = (evt) => {
-  // console.log("onmessage", evt);
-  // wsCtx.send("Hellox from server");
-  wsEvents.onMessage?.(evt, wsCtx);
+  try {
+    wsEvents.onMessage?.(evt, wsCtx);
+  } catch (error) {
+    logger.Error().Err(error).Msg("Error processing WebSocket message");
+    wsEvents.onError?.(error, wsCtx);
+  }
 };
```
Also applies to: 135-139
```ts
async serve<T>(_app: Hono, _port?: number): Promise<T> {
  return {} as T;
}
```
Implement proper type handling in serve method
The serve method returns an empty object cast to type T, which could lead to runtime errors.
```diff
 async serve<T>(_app: Hono, _port?: number): Promise<T> {
-  return {} as T;
+  throw new Error('serve() method must be implemented with proper return type');
 }
```
```ts
// const env = c.env as Env;
// const id = env.FP_META_GROUPS.idFromName([conn.key.tenant, conn.key.ledger].join(":"));
// const dObj = env.FP_META_GROUPS.get(id);
// c.env.WS_EVENTS = createEvents(c);
// return dObj.fetch(c.req.raw as unknown as CFRequestInfo) as unknown as Promise<Response>;
// this._upgradeWebSocket!(createEvents)(c, next);
```
💡 Codebase verification
Remove commented durable object code
The commented code is outdated and has been replaced with a new WebSocket implementation. The old durable object approach is no longer used in the active codebase.
🔗 Analysis chain
Remove or implement commented durable object code
There are several commented sections related to durable objects that should either be implemented or removed.
Let's check if durable objects are used elsewhere in the codebase:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for durable object usage
rg "durable.*object" --type ts
```
Length of output: 32
Script:
```shell
#!/bin/bash
# Search for FP_META_GROUPS and related components
rg "FP_META_GROUPS" --type ts
rg "createEvents" --type ts
rg "_upgradeWebSocket" --type ts
# Search for any environment bindings definition
ast-grep --pattern 'interface Env {
  $$$
}'
```
Length of output: 4124
src/cloud/backend/server.ts (Outdated)
```ts
    // case "/fp": {
    //   const rq = await CFMsgProcessor.dispatch(() => req.json(), { env: this.env });
    //   return json(rq.res, MsgIsQSError(rq) ? 422 : 200);
    // }
    // case "/ws": {
    //   const upgradeHeader = req.headers.get("Upgrade");
    //   if (!upgradeHeader || upgradeHeader !== "websocket") {
    //     return new Response("Durable Object expected Upgrade: websocket", { status: 426 });
    //   }
    //   const { 0: fromClient, 1: toClient } = new WebSocketPair();
    //   this.ctx.acceptWebSocket(toClient);
    //   // this.logger.Debug().Msg("fetch");
    //   return new Response(null, {
    //     status: 101,
    //     webSocket: fromClient,
    //   });
    // }
    // default: {
    //   const logger = ensureLogger(this.env);
    //   return json(buildErrorMsg(logger, { tid: "internal" }, new NotFoundError(`Notfound:${path}`)), 404);
    // }
    // }
    // return new Response("Not implemented", { status: 501 });
  }

  webSocketClose(ws: WebSocket, code: number, reason: string): void | Promise<void> {
    try {
      // if (typeof this.env.WS_EVENTS.onClose === "function") {
      this.env.WS_EVENTS.onClose?.({ code, reason } as CloseEvent, {} as WSContext);
      // }
      ws.close(code, reason);
      // eslint-disable-next-line @typescript-eslint/no-unused-vars
    } catch (e) {
      // ignore
    } finally {
      this.logger.Debug().Str("code", code.toString()).Str("reason", reason).Msg("webSocketClose");
    }
  }

  // updateMeta(up: UpdateMetaEvent, ctx: CtxWithGroup): void {
  //   const wsSocks = ctx.dobj.getWebSockets();
  //   const groupWs = wsSocks.map((ws) => ({
  //     ws,
  //     group: ws.deserializeAttachment() as FPMetaGroup,
  //   }));
  //   const joinedMeta = groupWs.reduce((acc, { group }) => {
  //     if (group && group.lastMeta) {
  //       acc.push(...group.lastMeta.metas);
  //     }
  //     return acc
  //   }, [] as CRDTEntry[])
  //   const now = new Date()
  //   const joinedQS = {
  //     req: { ...qs.req, metas: joinedMeta },
  //     res: qs.res,
  //   }
  //   return acc;
  //   }, [] as CRDTEntry[]);
  //   if (joinedMeta.length === 0) {
  //     return [];
  //   }
  //   const now = new Date();
  //   const joinedUp = {
  //     ...up,
  //     metas: joinedMeta,
  //   };
  //   groupWs.forEach(({ ws, group }) => {
  //     // console.log("group->", group);
  //     // group = {
  //     //   ...group,
  //     //   msgSeq: (group ? group.msgSeq : 0) + 1,
  //     // }
  //     group = {
  //       ...group,
  //       qs: {
  //         ...group.qs,
  //         s: {
  //           ...group.qs.s,
  //           [up.type]: {
  //             msgSeq: group.qs?.s?.[up.type]?.msgSeq + 1,
  //           },
  //         },
  //       },
  //       lastUsed: now,
  //     } satisfies FPMetaGroup;
  //     ws.serializeAttachment(group);
  //     const toSend = {
  //       ...joinedUp,
  //       subscriberId: group.group.subscriberId,
  //       connId: group.group.connId,
  //     };
  //     this.logger.Debug().Any("event", toSend).Msg("updateMeta");
  //     ws.send(
  //       // this is not the best way to do this
  //       JSON.stringify(toSend)
  //     );
  //   });
  //   return joinedMeta;
  // }
}
```
🛠️ Refactor suggestion
Refactor FPMetaGroups for Docker environment.
The class needs significant changes for Docker transition:
- Remove DurableObject dependency
- Complete the WebSocket message handling implementation
- Implement proper error handling in webSocketClose
Consider this structure for the Docker implementation:
```diff
-export class FPMetaGroups extends DurableObject<Env> {
+export class FPMetaGroups {
+  private readonly env: Env;
+  private readonly ctx: any; // Define appropriate context type
+
+  constructor(env: Env) {
+    this.env = env;
+    this.logger = ensureLogger(env, "FPMetaGroups");
+  }
```
Committable suggestion skipped: line range outside the PR's diff.
```ts
async close(uri: URI): Promise<bs.VoidResult> {
  const uriStr = uri.toString();
  // CAUTION here is my happen a mutation of subscriptions caused by unsub
  for (const sub of Array.from(subscriptions.values())) {
    for (const s of sub) {
      if (s.uri.toString() === uriStr) {
        s.unsub();
      }
    }
  }
  const rConn = await this.getCloudConnection(uri);
  if (rConn.isErr()) {
    return this.logger.Error().Err(rConn).Msg("Error in getCloudConnection").ResultError();
  }
  const conn = rConn.Ok();
  await conn.close();
  return Result.Ok(undefined);
}
```
🛠️ Refactor suggestion
Handle potential race conditions in close method.
The close method modifies the subscriptions collection while iterating:
```diff
-  for (const sub of Array.from(subscriptions.values())) {
+  const currentSubs = Array.from(subscriptions.values());
+  const closeTasks = [];
+  for (const sub of currentSubs) {
     for (const s of sub) {
       if (s.uri.toString() === uriStr) {
-        s.unsub();
+        closeTasks.push(s.unsub());
       }
     }
   }
+  await Promise.all(closeTasks);
```
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/cloud/connection.test.ts (1)
242-273: 🛠️ Refactor suggestion

Add WAL-specific test scenarios.
Current WAL tests only verify URL signing. Consider adding tests for:
- WAL entry ordering
- Conflict resolution
- Recovery scenarios
Would you like me to provide example test cases for these WAL-specific scenarios?
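As a starting point, the entry-ordering case could be reduced to a pure replay check before wiring it into the real connection tests. The `WalEntry` shape below is hypothetical, not the project's real WAL types:

```typescript
// Hypothetical WAL entry shape for an ordering test.
type WalEntry = { seq: number; payload: string };

// Entries may arrive out of order; recovery should replay them by sequence number.
function replayOrder(entries: WalEntry[]): string[] {
  return [...entries]
    .sort((a, b) => a.seq - b.seq)
    .map((e) => e.payload);
}

// Simulated out-of-order arrival.
const arrived: WalEntry[] = [
  { seq: 3, payload: "c" },
  { seq: 1, payload: "a" },
  { seq: 2, payload: "b" },
];
```

The same assertion style carries over to conflict-resolution and recovery scenarios once the transport is in the loop.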
🧹 Nitpick comments (3)
src/cloud/connection.test.ts (3)
64-64: Improve port selection logic.

The current port selection might conflict with other services and doesn't handle port-in-use scenarios. Consider using a port-finder utility.

```diff
-const port = +(process.env.FP_WRANGLER_PORT || 0) || 1024 + Math.floor(Math.random() * (65536 - 1024));
+// Using get-port or similar utility
+const getPort = require('get-port');
+const port = await getPort({
+  // Prefer an explicit port; otherwise avoid well-known service ports
+  port: process.env.FP_WRANGLER_PORT
+    ? Number(process.env.FP_WRANGLER_PORT)
+    : getPort.makeRange(3000, 65535),
+});
```
115-119: Improve error handling and messages in tests.

The error handling in tests could be enhanced:
- Add more descriptive error messages including actual vs expected values
- Test additional error scenarios (e.g., malformed responses)
Example improvement:

```diff
-assert.fail("expected MsgResGetData", JSON.stringify(res));
+assert.fail(
+  `Expected MsgResGetData but got ${res.type}\n` +
+  `Actual response: ${JSON.stringify(res, null, 2)}`
+);
```

Also applies to: 135-137, 144-146, 193-195, 214-215, 224-225, 234-235, 250-251, 260-261, 270-271
200-205: Extract test helpers and constants.

The `sup()` function and other test setup could be improved:
- Move to a separate test helpers file
- Use constants for magic values
- Add type safety for test data
```diff
+// test-constants.ts
+export const TEST_CONSTANTS = {
+  PATH: 'test/me',
+  KEY: 'key-test',
+  TENANT: 'Tenant',
+  LEDGER: 'Ledger'
+} as const;
+
+// test-helpers.ts
+export function createTestUrlParams(): ReqSignedUrlParam {
+  return {
+    path: TEST_CONSTANTS.PATH,
+    key: TEST_CONSTANTS.KEY,
+  };
+}
+
+// In test file:
-function sup() {
-  return {
-    path: "test/me",
-    key: "key-test",
-  } satisfies ReqSignedUrlParam;
-}
+const sup = createTestUrlParams;
```
src/cloud/connection.test.ts (Outdated)
```ts
// describe("Meta", async () => {
//   // const res = await conn.request(buildReqGetMeta(), { waitFor: MsgIsResGetMeta });
//   // expect(MsgIsError(res)).toBeTruthy();
// });
```
🛠️ Refactor suggestion
Implement missing test scenarios.
Several critical test scenarios are missing:
- Uncomment and implement metadata tests
- Add tests for storage backend errors
- Add tests for concurrent connections
- Add tests for connection recovery
Would you like me to provide example implementations for these test scenarios?
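For the connection-recovery scenario, one hedged sketch is a retry helper that such tests could exercise against a deliberately flaky connect function. The helper and the flaky backend below are hypothetical illustrations, not the project's API:

```typescript
// Hypothetical retry helper for connection-recovery tests; `connect` is any
// async factory that may fail transiently.
async function connectWithRecovery<T>(
  connect: () => Promise<T>,
  maxAttempts: number,
  delayMs: number
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await connect();
    } catch (err) {
      lastErr = err;
      if (attempt < maxAttempts) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastErr;
}

// Simulated flaky backend: fails twice, then succeeds.
let failures = 2;
const flaky = async () => {
  if (failures > 0) {
    failures--;
    throw new Error("ECONNREFUSED");
  }
  return "connected";
};
```

A concurrent-connections test could then wrap the same helper in `Promise.all` over several independent flaky factories.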
src/cloud/connection.test.ts (Outdated)
```ts
async function refURL(sp: ResSignedUrl) {
  const { env } = await resolveToml();
  return (
    await calculatePreSignedUrl(sp, {
      storageUrl: URI.from(env.STORAGE_URL),
      aws: {
        accessKeyId: env.ACCESS_KEY_ID,
        secretAccessKey: env.SECRET_ACCESS_KEY,
        region: env.REGION,
      },
      test: {
        amzDate: URI.from(sp.signedUrl).getParam("X-Amz-Date"),
      },
    })
  )
    .Ok()
    .asObj();
}
```
🛠️ Refactor suggestion
Add error handling and input validation to refURL function.
The URL signing helper function needs additional safeguards:
- Validate environment variables before use
- Add error handling for missing X-Amz-Date parameter
- Validate input ResSignedUrl parameter
```diff
 async function refURL(sp: ResSignedUrl) {
   const { env } = await resolveToml();
+  // Validate required environment variables
+  const requiredEnvVars = ['STORAGE_URL', 'ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', 'REGION'];
+  for (const envVar of requiredEnvVars) {
+    if (!env[envVar]) {
+      throw new Error(`Missing required environment variable: ${envVar}`);
+    }
+  }
+
+  // Validate input
+  if (!sp?.signedUrl) {
+    throw new Error('Invalid ResSignedUrl: missing signedUrl');
+  }
+
+  const amzDate = URI.from(sp.signedUrl).getParam("X-Amz-Date");
+  if (!amzDate) {
+    throw new Error('Missing X-Amz-Date parameter in signed URL');
+  }
   return (
     await calculatePreSignedUrl(sp, {
       storageUrl: URI.from(env.STORAGE_URL),
       aws: {
         accessKeyId: env.ACCESS_KEY_ID,
         secretAccessKey: env.SECRET_ACCESS_KEY,
         region: env.REGION,
       },
       test: {
-        amzDate: URI.from(sp.signedUrl).getParam("X-Amz-Date"),
+        amzDate,
       },
     })
   )
     .Ok()
     .asObj();
 }
```
Actionable comments posted: 4
♻️ Duplicate comments (4)
src/cloud/meta-merger/meta-merger.ts (1)
58-62: 🛠️ Refactor suggestion

Add transaction support for schema creation.

When creating schema in local Docker environment, ensure atomic operations using transactions.

Apply this diff to wrap schema creation in a transaction:

```diff
 async createSchema(drop = false) {
+  await this.db.exec('BEGIN TRANSACTION');
+  try {
     for (const i of this.sql.metaSend.sqlCreateMetaSend(drop)) {
       await i.run();
     }
+    await this.db.exec('COMMIT');
+  } catch (error) {
+    await this.db.exec('ROLLBACK');
+    throw new Error(`Schema creation failed: ${error.message}`);
+  }
 }
```

src/cloud/meta-merger/meta-merger.test.ts (1)
17-19: 🛠️ Refactor suggestion

Optimize array concatenation performance.

Using spread operator in reduce has O(n²) time complexity. Use Array.prototype.push for better performance.

Apply this diff to optimize the implementation:

```diff
-function toCRDTEntries(rows: MetaConnection[]) {
-  return rows.reduce((r, i) => [...r, ...i.metas], [] as CRDTEntry[]);
+function toCRDTEntries(rows: MetaConnection[]) {
+  const result: CRDTEntry[] = [];
+  for (const row of rows) {
+    result.push(...row.metas);
+  }
+  return result;
+}
```

🧰 Tools
🪛 Biome (1.9.4)

[error] 18-18: Avoid the use of spread (`...`) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in `.reduce`) because it causes a time complexity of O(n^2). Consider methods such as `.splice` or `.push` instead.

(lint/performance/noAccumulatingSpread)
src/cloud/msg-dispatch.ts (1)
41-67: ⚠️ Potential issue

Enhance DDoS protection mechanisms

The current `ConnectionManager` implementation provides basic connection tracking and removes old connections based on a time threshold. However, it lacks comprehensive DDoS protection features. Consider enhancing the implementation with:
- Request rate limiting to prevent excessive requests from a single source.
- Connection pooling to efficiently manage concurrent connections.
- Resource usage monitoring and limits to prevent resource exhaustion.
- Timeout mechanisms for inactive or idle connections to release resources promptly.
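For the rate-limiting point, one common shape is a per-client token bucket that a connection manager could consult before accepting a request. The names and limits below are illustrative, not the project's API:

```typescript
// Sketch of a token-bucket rate limiter; `now` is injectable for deterministic tests.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,
    private readonly refillPerSec: number,
    now = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, consuming one token.
  allow(now = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Capacity 2, refilling 1 token per second, starting at t=0.
const bucket = new TokenBucket(2, 1, 0);
```

A manager would keep one bucket per client key (e.g. IP or connection id) and reject or delay requests when `allow()` returns false.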
src/cloud/msger.ts (1)
32-34: ⚠️ Potential issue

Add input validation to selectRandom.

The function silently returns `undefined` for an empty input array, which is critical for Docker endpoint selection.

Apply this diff to add validation:

```diff
 export function selectRandom<T>(arr: T[]): T {
+  if (!arr || arr.length === 0) {
+    throw new Error('Cannot select from empty array');
+  }
   return arr[Math.floor(Math.random() * arr.length)];
 }
```
🧹 Nitpick comments (17)
src/cloud/pre-signed-url.ts (3)
9-14: Remove commented out interface.

The `PreSignedConnMsg` interface is commented out and appears to be unused. If it's no longer needed, consider removing it to maintain code cleanliness.
28-32: Clean up commented code blocks.

There are multiple blocks of commented out code:
- Connection validation (lines 28-32)
- File suffix logic (lines 40-48)
If these are no longer needed, remove them. If they represent future functionality, document them with TODO comments.
Also applies to: 40-48
50-59: Consider adding URL validation.

The URL construction looks solid, but consider adding validation for:
- Maximum URL length constraints
- Valid characters in tenant/ledger/store names
- Proper URL encoding of parameters
src/cloud/meta-merger/meta-merger.ts (3)
37-37: Remove commented code.

Remove the commented lines about "sthis" as they appear to be unused.

```diff
-// readonly sthis: SuperThis;
 readonly sql: {
   readonly tenant: TenantSql;
   readonly tenantLedger: TenantLedgerSql;
   readonly metaByTenantLedger: MetaByTenantLedgerSql;
   readonly metaSend: MetaSendSql;
 };
 constructor(db: SQLDatabase) {
   this.db = db;
-  // this.sthis = sthis;
   const tenant = new TenantSql(db);
```

Also applies to: 47-47
70-74: Improve empty metaCIDs handling.

Using a date string as a placeholder when metaCIDs is empty is a code smell. Consider using a more explicit approach or handling empty cases differently.

```diff
-metaCIDs: metaCIDs.length ? metaCIDs : [new Date().toISOString()],
+metaCIDs: metaCIDs.length ? metaCIDs : ['__DELETE_ALL__'],
```
103-115: Add error handling to metaToSend.

The method should handle potential database operation failures, especially since it's used in a testing context where database errors need to be clearly identified.

```diff
 async metaToSend(sink: Connection, now = new Date()): Promise<CRDTEntry[]> {
+  try {
     const bySink = toByConnection(sink);
     const rows = await this.sql.metaSend.selectToAddSend({ ...bySink, now });
     await this.sql.metaSend.insert(
       rows.map((row) => ({
         metaCID: row.metaCID,
         reqId: row.reqId,
         resId: row.resId,
         sendAt: row.sendAt,
       }))
     );
     return rows.map((row) => row.meta);
+  } catch (error) {
+    throw new Error(`Failed to process metadata: ${error.message}`);
+  }
 }
```

src/cloud/meta-merger/meta-merger.test.ts (1)
34-46: Add cleanup in afterAll hook.

The test suite creates a schema but doesn't clean it up after tests complete. Consider adding an afterAll hook to drop the schema.

```ts
beforeAll(async () => {
  if (runtimeFn().isCFWorker) {
    const { CFWorkerSQLDatabase } = await import("./cf-worker-abstract-sql.js");
    const { env } = await import("cloudflare:test");
    db = new CFWorkerSQLDatabase((env as Env).DB);
  } else {
    const { BetterSQLDatabase } = await import("./bettersql-abstract-sql.js");
    db = new BetterSQLDatabase("./dist/test.db");
  }
  mm = new MetaMerger(db);
  await mm.createSchema();
});
```

```diff
+afterAll(async () => {
+  await mm.createSchema(true); // Drop schema
+});
```

src/cloud/msg-type-meta.ts (1)
25-39: Consider refactoring 'build' functions to reduce code duplication

The buildReqPutMeta, buildBindGetMeta, and buildReqDelMeta functions have similar structures for constructing message objects. Consider refactoring these to use a generic helper function to eliminate code duplication and improve maintainability.

For example, you can create a generic helper function:
function buildReqMeta<T extends MsgBase>(
  sthis: NextId,
  type: T['type'],
  params: ReqSignedUrlParam,
  gwCtx: GwCtx,
  additionalFields?: Partial<T>
): T {
  return {
    tid: sthis.nextId().str,
    ...gwCtx,
    type,
    version: VERSION,
    params,
    ...additionalFields,
  } as T;
}

Then refactor the specific build functions:
-export function buildReqPutMeta(
-  sthis: NextId,
-  signedUrlParams: ReqSignedUrlParam,
-  metas: CRDTEntry[],
-  gwCtx: GwCtx
-): ReqPutMeta {
-  return {
-    tid: sthis.nextId().str,
-    type: "reqPutMeta",
-    ...gwCtx,
-    version: VERSION,
-    params: signedUrlParams,
-    metas,
-  };
-}
+export function buildReqPutMeta(
+  sthis: NextId,
+  signedUrlParams: ReqSignedUrlParam,
+  metas: CRDTEntry[],
+  gwCtx: GwCtx
+): ReqPutMeta {
+  return buildReqMeta<ReqPutMeta>(sthis, "reqPutMeta", signedUrlParams, gwCtx, { metas });
+}

Similarly, you can apply this refactoring to buildBindGetMeta and buildReqDelMeta.

Also applies to: 85-93, 122-131
src/cloud/node-hono-server.ts (2)
81-86: Enhance server shutdown with timeout and cleanup.

The close method needs a timeout and proper cleanup of resources.
-async close(): Promise<void> {
-  this._server.close(() => {
-    /* */
-  });
-}
+async close(timeoutMs: number = 5000): Promise<void> {
+  if (!this._server) return;
+
+  const timeout = new Promise((_, reject) =>
+    setTimeout(() => reject(new Error('Server close timeout')), timeoutMs)
+  );
+
+  try {
+    await Promise.race([
+      new Promise<void>((resolve) => {
+        this._server.close(() => resolve());
+      }),
+      timeout
+    ]);
+  } finally {
+    this._server = undefined;
+  }
+}
96-101: Track and cleanup WebSocket connections.

The WebSocket upgrade middleware should track active connections to prevent resource leaks.
+ private activeConnections = new Set<WebSocket>();
+
 override upgradeWebSocket(createEvents: (c: Context) => WSEvents | Promise<WSEvents>): ConnMiddleware {
   return async (_conn, c, next) => {
-    return this._upgradeWebSocket(createEvents)(c, next);
+    const result = await this._upgradeWebSocket(createEvents)(c, next);
+    if (result.response && result.socket) {
+      this.activeConnections.add(result.socket);
+      result.socket.addEventListener('close', () => {
+        this.activeConnections.delete(result.socket);
+      });
+    }
+    return result;
   };
 }

src/cloud/backend/cf-hono-server.ts (1)
106-109: Remove or implement commented durable object code.

The commented code for durable objects should either be implemented or removed to maintain code clarity.
Remove these commented sections if they are no longer needed.
Also applies to: 123-128
src/cloud/msg-dispatcher-impl.ts (2)
49-54: Unused parameters in handler function

In the handler for MsgIsReqGestalt, the parameters _sthis, _logger, and _ctx are unused. While prefixing unused parameters with an underscore indicates they are intentional, consider removing them for clarity and to prevent confusion.
72-124: Refactor repetitive handler registration

The registerMsg method is called multiple times with similar structures for different message types. This repetition can be reduced by refactoring the code to improve maintainability. Consider creating a loop or helper function to register handlers dynamically.

Here's a suggested refactor:
- dp.registerMsg(
-   {
-     match: MsgIsReqGetData,
-     fn: (sthis, logger, ctx, msg: MsgWithConn<ReqGetData>) => {
-       return buildResGetData(sthis, logger, msg, ctx.impl);
-     },
-   },
-   {
-     match: MsgIsReqPutData,
-     fn: (sthis, logger, ctx, msg: MsgWithConn<ReqPutData>) => {
-       return buildResPutData(sthis, logger, msg, ctx.impl);
-     },
-   },
-   // ... other handlers ...
- );
+ const handlers = [
+   { match: MsgIsReqGetData, fn: buildResGetData },
+   { match: MsgIsReqPutData, fn: buildResPutData },
+   { match: MsgIsReqDelData, fn: buildResDelData },
+   // ... other handlers ...
+ ];
+ handlers.forEach(({ match, fn }) => {
+   dp.registerMsg({
+     match,
+     fn: (sthis, logger, ctx, msg) => {
+       return fn(sthis, logger, msg, ctx.impl);
+     },
+   });
+ });

src/cloud/msger.ts (1)
36-46: Enhance timeout handling for Docker environments.

The timeout implementation should include connection context and support for Docker container startup delays.
Apply this diff to enhance the implementation:
 export function timeout<T>(ms: number, promise: Promise<T>): Promise<T> {
+  const timeoutError = new Error(`Operation timeout after ${ms}ms`);
+  timeoutError.name = 'TimeoutError';
   return new Promise((resolve, reject) => {
     const timer = setTimeout(() => {
-      reject(new Error(`TIMEOUT after ${ms}ms`));
+      reject(timeoutError);
     }, ms);
     promise
       .then(resolve)
       .catch(reject)
       .finally(() => clearTimeout(timer));
   });
 }

src/cloud/msg-types.ts (1)
461-483: Enhance error handling for Docker environment.

The error message builder should include Docker-specific context for better debugging.
Apply this diff to improve error context:
 export function buildErrorMsg(
   sthis: SuperThis,
   logger: Logger,
   base: Partial<MsgBase & { ref?: unknown }>,
   error: Error,
   body?: string,
   stack?: string[]
 ): ErrorMsg {
+  const environment = sthis.env.get("FP_ENVIRONMENT") || "unknown";
+  const isDocker = sthis.env.get("FP_DOCKER") === "true";
   if (!stack && sthis.env.get("FP_STACK")) {
     stack = error.stack?.split("\n");
   }
   const msg = {
     src: base,
     type: "error",
     tid: base.tid || "internal",
     message: error.message,
     version: VERSION,
     body,
     stack,
+    context: {
+      environment,
+      isDocker,
+      connectionType: base.type
+    }
   } satisfies ErrorMsg;
   logger.Any("ErrorMsg", msg);
   return msg;
 }

src/cloud/connection.test.ts (2)
221-252: Enhance Data test suite coverage.

Consider:
- Uncomment and implement parameter assertions
- Add tests for error scenarios
- Add tests for invalid parameters
- Add tests for concurrent requests
254-322: Enhance Meta test suite coverage.

Consider:
- Uncomment and implement parameter assertions
- Add tests for metadata conflicts
- Add tests for concurrent metadata updates
- Add tests for metadata recovery scenarios
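One way to pin down "concurrent metadata updates" in a deterministic unit test is to assert that the merge result is order-independent. A minimal self-contained sketch with hypothetical types (not the real MetaMerger API):

```typescript
// Simulated metadata store with a last-writer-wins rule keyed on a
// sequence number: the highest seq wins, so the outcome does not
// depend on the interleaving of concurrent writers.
interface MetaEntry {
  cid: string;
  seq: number;
}

class MetaStore {
  private readonly entries = new Map<string, MetaEntry>();

  put(key: string, entry: MetaEntry): void {
    const current = this.entries.get(key);
    if (!current || entry.seq > current.seq) {
      this.entries.set(key, entry);
    }
  }

  get(key: string): MetaEntry | undefined {
    return this.entries.get(key);
  }
}

const store = new MetaStore();
// Apply the same updates in a shuffled order; the winner is still seq 4.
[0, 3, 1, 4, 2].forEach((i) => store.put("main", { cid: `cid-${i}`, seq: i }));
console.log(store.get("main")?.cid); // "cid-4" — arrival order did not matter
```

A real test would drive the actual MetaMerger the same way: apply the same set of updates in several orders and assert the merged state converges.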
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- src/cloud/backend/cf-hono-server.ts (1 hunks)
- src/cloud/client/gateway.ts (1 hunks)
- src/cloud/connection.test.ts (1 hunks)
- src/cloud/hono-server.ts (1 hunks)
- src/cloud/http-connection.ts (1 hunks)
- src/cloud/meta-merger/cf-worker-abstract-sql.ts (1 hunks)
- src/cloud/meta-merger/meta-merger.test.ts (1 hunks)
- src/cloud/meta-merger/meta-merger.ts (1 hunks)
- src/cloud/meta-merger/tenant-ledger.ts (1 hunks)
- src/cloud/msg-dispatch.ts (1 hunks)
- src/cloud/msg-dispatcher-impl.ts (1 hunks)
- src/cloud/msg-type-meta.ts (1 hunks)
- src/cloud/msg-types.ts (1 hunks)
- src/cloud/msger.ts (1 hunks)
- src/cloud/node-hono-server.ts (1 hunks)
- src/cloud/pre-signed-url.ts (1 hunks)
- src/cloud/test-helper.ts (1 hunks)
- src/cloud/ws-connection.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- src/cloud/client/gateway.ts
- src/cloud/meta-merger/cf-worker-abstract-sql.ts
- src/cloud/meta-merger/tenant-ledger.ts
🧰 Additional context used
🪛 Biome (1.9.4)
src/cloud/ws-connection.ts
[error] 179-179: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
src/cloud/meta-merger/meta-merger.test.ts
[error] 18-18: Avoid the use of spread (...) syntax on accumulators.
Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.
(lint/performance/noAccumulatingSpread)
src/cloud/node-hono-server.ts
[error] 37-37: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 37-37: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/backend/cf-hono-server.ts
[error] 45-45: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 45-45: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/hono-server.ts
[error] 42-42: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 157-157: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 157-157: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/msger.ts
[error] 160-160: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
src/cloud/msg-types.ts
[error] 356-356: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Quality Checks
🔇 Additional comments (21)
src/cloud/pre-signed-url.ts (2)
15-25: Consider safer credential handling.

The AWS credentials are exposed in the interface. Ensure your local Docker environment and deployment pipeline handle these secrets securely.
33-38: Verify store name construction.

The store name construction with optional index parameter needs verification in the local Docker setup. Ensure this works correctly with the new testing framework.
src/cloud/meta-merger/meta-merger.ts (1)
90-99: Enhance error handling in addMeta.

The current implementation only logs warnings when metadata insertion fails. Consider whether these errors should be propagated to the caller, especially in a testing context.
✅ Verification successful
Current error handling in addMeta is appropriate.
The warning-only approach appears to be intentional based on the test cases and usage patterns. The method is often used in loops to process multiple items, where failing one item shouldn't prevent processing of others. The current implementation appropriately logs warnings for tracking issues while maintaining operational flow.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if there are any error handling patterns in similar methods
ast-grep --pattern 'try { $$$ } catch (e) { $$.logger.Warn() $$$ }'

Length of output: 75
Script:
#!/bin/bash
# Search for try-catch blocks in meta-merger files
echo "=== Try-catch patterns in meta-merger ==="
rg -A 5 "try\s*{" src/cloud/meta-merger/

# Look for test files
echo -e "\n=== Related test files ==="
fd "meta-merger.*test" src/

# Check for other error handling patterns
echo -e "\n=== Other error handling patterns ==="
ast-grep --pattern 'catch (e) { $$ }'

Length of output: 485
Script:
#!/bin/bash
# Check test file content
echo "=== Test file content ==="
cat src/cloud/meta-merger/meta-merger.test.ts

# Search for try-catch with fixed pattern
echo -e "\n=== Try-catch blocks ==="
rg "try {" -A 5 src/cloud/meta-merger/meta-merger.ts

# Look for error-related tests
echo -e "\n=== Error-related tests ==="
rg "error|fail|throw|catch" src/cloud/meta-merger/meta-merger.test.ts

Length of output: 6513
src/cloud/meta-merger/meta-merger.test.ts (1)
70-218: Well-structured and comprehensive test suite!

The test cases provide good coverage of the MetaMerger functionality, with proper isolation between tests and clear descriptions. This will help ensure the reliability of the local Docker testing setup.
src/cloud/msg-type-meta.ts (2)
51-60: Ensure 'meta' properties do not conflict with assigned properties

In buildResPutMeta, the ...meta object is spread before assigning properties like tid, conn, tenant, type, and version. Verify that meta does not contain any of these properties, or that overwriting them is intentional, to prevent unexpected property values.
102-110: Ensure 'metaParam' properties do not conflict with assigned properties

In buildEventGetMeta, the ...metaParam object is spread before assigning properties such as tid, type, params, and version. Verify that metaParam does not contain these properties, or that overwriting them is intentional, to prevent unintended behavior.

src/cloud/node-hono-server.ts (2)
22-27: 🛠️ Refactor suggestion

Initialize required properties in constructor.

The use of non-null assertions (!) for required properties is risky. Consider initializing these properties in the constructor or marking them as potentially undefined.
- _upgradeWebSocket!: UpgradeWebSocket;
- _injectWebSocket!: (t: unknown) => void;
- _serve!: serveFn;
- _server!: ServerType;
+ private _upgradeWebSocket?: UpgradeWebSocket;
+ private _injectWebSocket?: (t: unknown) => void;
+ private _serve?: serveFn;
+ private _server?: ServerType;

Likely invalid or redundant comment.
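The difference matters at runtime: a field typed with a non-null assertion crashes when used before assignment, while an optional field forces callers through a guard. A small illustration (Holder is a hypothetical stand-in, not the actual server class):

```typescript
// With `_server!: ServerLike`, TypeScript trusts the field is always set, so
// a use-before-init slips through the type checker and throws at runtime.
// Optional typing makes the unset state visible and checkable.
interface ServerLike {
  close(): void;
}

class Holder {
  private _server?: ServerLike;

  set(server: ServerLike): void {
    this._server = server;
  }

  // The guard is now required by the type system instead of assumed.
  close(): string {
    if (!this._server) return "no-op: server not started";
    this._server.close();
    this._server = undefined;
    return "closed";
  }
}

const h = new Holder();
console.log(h.close()); // "no-op: server not started"
h.set({ close: () => undefined });
console.log(h.close()); // "closed"
```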
73-80: 🛠️ Refactor suggestion

Improve error handling in serve method.

The server startup process lacks error handling and could leave the application in an inconsistent state if initialization fails.
 async serve(app: Hono, port: number): Promise<void> {
-  await new Promise<void>((resolve) => {
-    this._server = this._serve({ fetch: app.fetch, port }, () => {
-      this._injectWebSocket(this._server);
-      resolve();
-    });
-  });
+  try {
+    await new Promise<void>((resolve, reject) => {
+      try {
+        this._server = this._serve({ fetch: app.fetch, port }, () => resolve());
+      } catch (err) {
+        reject(new Error(`Failed to start server: ${err}`));
+      }
+    });
+    if (!this._server) {
+      throw new Error('Server failed to initialize');
+    }
+    this._injectWebSocket(this._server);
+  } catch (err) {
+    await this.close().catch(() => {}); // Cleanup on failure
+    throw err;
+  }
 }

Likely invalid or redundant comment.
src/cloud/backend/cf-hono-server.ts (1)
138-154: 🛠️ Refactor suggestion

Improve WebSocket event handling.

The WebSocket event handlers need proper error handling and cleanup:
- onopen event is commented out
- Error handling in onmessage could be improved
- onclose handler needs proper cleanup
-// server.onopen = (ev) => {
-//   console.log("onopen", ev);
-//   wsEvents.onOpen?.(ev, wsCtx);
-// }
+server.onopen = (ev) => {
+  this.logger.Info().Msg("WebSocket connection opened");
+  wsEvents.onOpen?.(ev, wsCtx);
+};
 server.onerror = (err) => {
-  // console.log("onerror", err);
+  this.logger.Error().Err(err).Msg("WebSocket error occurred");
   wsEvents.onError?.(err, wsCtx);
 };
 server.onclose = (ev) => {
-  // console.log("onclose", ev);
+  this.logger.Info().Msg("WebSocket connection closed");
+  wsEvents.onClose?.(ev, wsCtx);
+  wsCtx = null; // Cleanup
 };
 server.onmessage = (evt) => {
-  // console.log("onmessage", evt);
-  wsEvents.onMessage?.(evt, wsCtx);
+  try {
+    wsEvents.onMessage?.(evt, wsCtx);
+  } catch (error) {
+    this.logger.Error().Err(error).Msg("Error processing WebSocket message");
+    wsEvents.onError?.(error, wsCtx);
+  }
 };

Likely invalid or redundant comment.
src/cloud/test-helper.ts (1)
179-215: 🛠️ Refactor suggestion

Improve process management in CFHonoServerFactory.
The current implementation has several issues:
- No timeout for server startup
- No proper error handling for stderr
- No cleanup of resources on failure
 export function CFHonoServerFactory() {
+  const STARTUP_TIMEOUT = 30000; // 30 seconds
+  let serverProcess: { pid?: number } = {};
+
   return {
     name: "CFHonoServer",
     factory: async (_sthis: SuperThis, _msgP: MsgerParams, remoteGestalt: Gestalt, port: number) => {
       if (process.env.FP_WRANGLER_PORT) {
         return new HonoServer(new CFHonoFactory());
       }
       const { tomlFile } = await resolveToml();
       $.verbose = !!process.env.FP_DEBUG;
+
+      const startupTimeout = new Promise((_, reject) =>
+        setTimeout(() => reject(new Error('Server startup timeout')), STARTUP_TIMEOUT)
+      );
+
       const runningWrangler = $`
         wrangler dev -c ${tomlFile} --port ${port} --env test-${remoteGestalt.protocolCapabilities[0]} --no-show-interactive-dev-session &
         waitPid=$!
         echo "PID:$waitPid"
         wait $waitPid`;
+
       const waitReady = new Future();
-      let pid: number | undefined;
       runningWrangler.stdout.on("data", (chunk) => {
         const mightPid = chunk.toString().match(/PID:(\d+)/)?.[1];
         if (mightPid) {
-          pid = +mightPid;
+          serverProcess.pid = +mightPid;
         }
         if (chunk.includes("Ready on http")) {
           waitReady.resolve(true);
         }
       });
+
       runningWrangler.stderr.on("data", (chunk) => {
-        console.error("!!", chunk.toString());
+        const error = chunk.toString();
+        console.error("Server error:", error);
+        if (!waitReady.isResolved()) {
+          waitReady.reject(new Error(`Server startup failed: ${error}`));
+        }
       });
-      await waitReady.asPromise();
+
+      try {
+        await Promise.race([waitReady.asPromise(), startupTimeout]);
+      } catch (error) {
+        if (serverProcess.pid) {
+          process.kill(serverProcess.pid);
+        }
+        throw error;
+      }
+
       return new HonoServer(
         new CFHonoFactory(() => {
-          if (pid) process.kill(pid);
+          if (serverProcess.pid) {
+            process.kill(serverProcess.pid);
+            serverProcess.pid = undefined;
+          }
         })
       );
     },
   };
 }

Likely invalid or redundant comment.
src/cloud/msg-dispatcher-impl.ts (1)
58-69: Ensure uniqueness and thread-safety in connection management

In the MsgIsReqOpen handler, a new resId is generated, and a connection is added to the connManager. Ensure that:

- The resId is unique across all connections to prevent collisions.
- The connManager.addConn method is thread-safe to handle concurrent connection requests without race conditions.

src/cloud/http-connection.ts (1)
128-128: Parameterize the HTTP method or confirm "PUT" is appropriate

In the request method, the HTTP method is hardcoded to "PUT". To make the HttpConnection class more flexible and reusable for different types of requests, consider parameterizing the HTTP method or ensuring that "PUT" is appropriate for all use cases.

Apply this diff to parameterize the HTTP method:
   headers: headers.AsHeaderInit(),
-  method: "PUT",
+  method: _opts.httpMethod || "PUT",
   body: rReqBody.Ok(),

src/cloud/ws-connection.ts (2)
45-96: ⚠️ Potential issue

Enhance connection handling for Docker environment.
The WebSocket connection implementation needs improvements for reliable Docker container connectivity:
- The timeout handler should include connection context in error messages
- Missing retry mechanism for initial container startup
- Timer cleanup is inconsistent across handlers
Apply this diff to improve the implementation:
 async start(): Promise<Result<void>> {
   const onOpenFuture: Future<Result<unknown>> = new Future<Result<unknown>>();
+  let retries = 0;
+  const maxRetries = 3;
+  const baseDelay = 1000;
+
+  const connect = async (): Promise<Result<void>> => {
   const timer = setTimeout(() => {
     const err = this.logger.Error().Dur("timeout", this.msgP.timeout).Msg("Timeout").AsError();
-    this.toMsg(buildErrorMsg(this.sthis, this.logger, {} as MsgBase, err));
+    this.toMsg(buildErrorMsg(this.sthis, this.logger, { tid: "internal", conn: this.conn } as MsgBase, err));
     onOpenFuture.resolve(Result.Err(err));
   }, this.msgP.timeout);
   this.ws.onopen = () => {
+    clearTimeout(timer);
     onOpenFuture.resolve(Result.Ok(undefined));
     this.opened = true;
   };
   this.ws.onerror = (ierr) => {
+    clearTimeout(timer);
     const err = this.logger.Error().Err(ierr).Msg("WS Error").AsError();
     onOpenFuture.resolve(Result.Err(err));
     const res = this.buildErrorMsg(this.logger.Error(), {}, err);
     this.toMsg(res);
   };
   this.ws.onclose = () => {
+    clearTimeout(timer);
     this.opened = false;
     this.close().catch((ierr) => {
       const err = this.logger.Error().Err(ierr).Msg("close error").AsError();
       onOpenFuture.resolve(Result.Err(err));
       this.toMsg(buildErrorMsg(this.sthis, this.logger, { tid: "internal" } as MsgBase, err));
     });
   };
   const result = await onOpenFuture.asPromise();
+  if (result.isErr() && retries < maxRetries) {
+    retries++;
+    const delay = baseDelay * Math.pow(2, retries - 1);
+    await new Promise(resolve => setTimeout(resolve, delay));
+    return connect();
+  }
+  return result;
+  };
+
+  return connect();
 }

Likely invalid or redundant comment.
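The retry schedule in the suggestion above (delay = baseDelay * 2^(retries - 1)) is easy to verify in isolation:

```typescript
// Compute the full exponential-backoff delay sequence up front, so the
// schedule itself can be unit-tested without any real waiting.
function backoffDelays(maxRetries: number, baseDelay: number): number[] {
  const delays: number[] = [];
  for (let retry = 1; retry <= maxRetries; retry++) {
    delays.push(baseDelay * Math.pow(2, retry - 1));
  }
  return delays;
}

console.log(backoffDelays(3, 1000)); // [ 1000, 2000, 4000 ]
```

Extracting the schedule like this also makes it simple to cap the maximum delay or add jitter later without touching the connection code.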
197-205: 🛠️ Refactor suggestion

Add timeout handling for requests.
The request implementation should handle timeouts to prevent hanging requests in Docker environments.
Apply this diff to add timeout handling:
 async request<Q extends MsgBase, S extends MsgBase>(req: Q, opts: RequestOpts): Promise<MsgWithError<S>> {
   if (!this.opened) {
     return buildErrorMsg(this.sthis, this.logger, req, this.logger.Error().Msg("Connection not open").AsError());
   }
   const future = new Future<S>();
   this.waitForTid.set(req.tid, { tid: req.tid, future, waitFor: opts.waitFor, timeout: opts.timeout });
   await this.sendMsg(req);
-  return future.asPromise();
+  const promise = future.asPromise();
+  if (opts.timeout) {
+    return Promise.race([
+      promise,
+      new Promise<MsgWithError<S>>((_, reject) =>
+        setTimeout(() => reject(new Error(`Request timeout after ${opts.timeout}ms`)), opts.timeout)
+      )
+    ]);
+  }
+  return promise;
 }

Likely invalid or redundant comment.
src/cloud/msger.ts (1)
202-220: 🛠️ Refactor suggestion

Enhance WebSocket connection for Docker environment.
The WebSocket connection setup needs improvements for Docker container communication:
- Add connection validation
- Include Docker-specific error handling
Apply this diff to improve the implementation:
 static async openWS(
   sthis: SuperThis,
   url: URI,
   msgP: MsgerParamsWithEnDe,
   exGestalt: ExchangedGestalt
 ): Promise<Result<MsgRawConnection>> {
   let ws: WebSocket;
+  const validateUrl = (url: string) => {
+    if (!url.startsWith('ws://') && !url.startsWith('wss://')) {
+      throw new Error('Invalid WebSocket URL scheme');
+    }
+  };
+
   url = url.build().URI();
+  validateUrl(url.toString());
+
   if (runtimeFn().isNodeIsh) {
     const { WebSocket } = await import("ws");
     ws = new WebSocket(url.toString()) as unknown as WebSocket;
   } else {
     ws = new WebSocket(url.toString());
   }
+
+  // Add connection error handler
+  ws.onerror = (error) => {
+    sthis.logger.Error().Err(error).Msg("WebSocket connection error");
+  };
+
   return Result.Ok(new WSConnection(sthis, ws, msgP, exGestalt));
 }

Likely invalid or redundant comment.
src/cloud/msg-types.ts (1)
536-567: 🛠️ Refactor suggestion

Add retry logic for Docker URL signing.
The URL signing process should include retry logic for Docker container startup delays.
Apply this diff to add retry logic:
 export async function buildRes<Q extends MsgWithTenantLedger<MsgWithConn<ReqSignedUrl>>, S extends ResSignedUrl>(
   method: SignedUrlParam["method"],
   store: FPStoreTypes,
   type: string,
   sthis: SuperThis,
   logger: Logger,
   req: Q,
   ctx: CalculatePreSignedUrl
 ): Promise<MsgWithError<S>> {
+  const maxRetries = 3;
+  const baseDelay = 1000;
+
+  const attemptSignUrl = async (attempt: number): Promise<Result<URL>> => {
+    try {
+      return await ctx.calculatePreSignedUrl(psm);
+    } catch (error) {
+      if (attempt < maxRetries) {
+        const delay = baseDelay * Math.pow(2, attempt);
+        await new Promise(resolve => setTimeout(resolve, delay));
+        return attemptSignUrl(attempt + 1);
+      }
+      throw error;
+    }
+  };
+
   const psm = {
     type: "reqSignedUrl",
     version: req.version,
     params: {
       ...req.params,
       method,
       store,
     },
     conn: req.conn,
     tenant: req.tenant,
     tid: req.tid,
   } satisfies PreSignedMsg;
-  const rSignedUrl = await ctx.calculatePreSignedUrl(psm);
+  const rSignedUrl = await attemptSignUrl(0);
   if (rSignedUrl.isErr()) {
     return buildErrorMsg(sthis, logger, req, rSignedUrl.Err());
   }
   return {
     ...req,
     params: psm.params,
     type,
     signedUrl: rSignedUrl.Ok().toString(),
   } as unknown as MsgWithError<S>;
 }

Likely invalid or redundant comment.
src/cloud/hono-server.ts (3)
164-170: Consider environment-specific CORS configuration.

The current CORS configuration uses permissive settings that might pose security risks in production. Consider moving CORS configuration to environment-specific settings.
237-240: Implement proper cleanup in WebSocket onClose handler.

The onClose handler only sets dp to undefined. Consider implementing proper cleanup of resources and logging the connection closure.
214-236: Add error handling for message encoding.

The WebSocket onMessage handler lacks error handling for message encoding operations. Consider wrapping the encoding operations in try-catch blocks.
src/cloud/connection.test.ts (2)
49-66: Add error handling to refURL function.

The URL signing helper function needs additional safeguards:
- Validate environment variables before use
- Add error handling for missing X-Amz-Date parameter
- Validate input ResSignedUrl parameter
323-354: Add WAL-specific test scenarios.

Current WAL tests only verify URL signing. Consider adding tests for:
- WAL entry ordering
- Conflict resolution
- Recovery scenarios
const a4f = new AwsClient({
  ...env.aws,
  region: env.aws.region || "us-east-1",
  service: "s3",
});
const signedUrl = await a4f
  .sign(
    new Request(opUrl.toString(), {
      method: psm.params.method,
    }),
    {
      aws: {
        signQuery: true,
        datetime: env.test?.amzDate,
        // datetime: env.TEST_DATE,
      },
    }
  )
  .then((res) => res.url);
return Result.Ok(URI.from(signedUrl));
🛠️ Refactor suggestion
Add error handling for AWS client operations.
The AWS client initialization and signing operations should include proper error handling:
- Handle network timeouts
- Handle invalid credentials
- Handle region-specific errors
Here's a suggested improvement:
const a4f = new AwsClient({
...env.aws,
region: env.aws.region || "us-east-1",
service: "s3",
});
- const signedUrl = await a4f
- .sign(
- new Request(opUrl.toString(), {
- method: psm.params.method,
- }),
- {
- aws: {
- signQuery: true,
- datetime: env.test?.amzDate,
- },
- }
- )
- .then((res) => res.url);
- return Result.Ok(URI.from(signedUrl));
+ try {
+ const signedUrl = await a4f
+ .sign(
+ new Request(opUrl.toString(), {
+ method: psm.params.method,
+ }),
+ {
+ aws: {
+ signQuery: true,
+ datetime: env.test?.amzDate,
+ },
+ }
+ )
+ .then((res) => res.url);
+ return Result.Ok(URI.from(signedUrl));
+ } catch (error) {
+ return Result.Err(new Error(`Failed to generate pre-signed URL: ${error.message}`));
+ }
readonly #wsOnMessage = async (event: MessageEvent) => {
  const rMsg = await exception2Result(() => this.msgP.ende.decode(event.data) as MsgBase);
  if (rMsg.isErr()) {
    this.logger.Error().Err(rMsg).Any(event.data).Msg("Invalid message");
    return;
  }
  const msg = rMsg.Ok();
  const waitFor = this.waitForTid.get(msg.tid);
  this.#onMsg.forEach((cb) => cb(msg));
  if (waitFor) {
    if (MsgIsError(msg)) {
      this.waitForTid.delete(msg.tid);
      waitFor.future.resolve(msg);
    } else if (waitFor.waitFor(msg)) {
      // wait for a specific type
      this.waitForTid.delete(msg.tid);
      waitFor.future.resolve(msg);
    } else {
      // wild-card
      this.waitForTid.delete(msg.tid);
      waitFor.future.resolve(msg);
    }
  }
};
🛠️ Refactor suggestion
Improve message handling robustness.
The message handling implementation has potential issues:
- Missing null check for waitFor.waitFor
- Duplicate code in message handling branches
Apply this diff to improve the implementation:
readonly #wsOnMessage = async (event: MessageEvent) => {
const rMsg = await exception2Result(() => this.msgP.ende.decode(event.data) as MsgBase);
if (rMsg.isErr()) {
this.logger.Error().Err(rMsg).Any(event.data).Msg("Invalid message");
return;
}
const msg = rMsg.Ok();
const waitFor = this.waitForTid.get(msg.tid);
this.#onMsg.forEach((cb) => cb(msg));
if (waitFor) {
+ const resolveAndDelete = () => {
+ this.waitForTid.delete(msg.tid);
+ waitFor.future.resolve(msg);
+ };
if (MsgIsError(msg)) {
- this.waitForTid.delete(msg.tid);
- waitFor.future.resolve(msg);
+ resolveAndDelete();
- } else if (waitFor.waitFor(msg)) {
+ } else if (waitFor.waitFor?.(msg)) {
// what for a specific type
- this.waitForTid.delete(msg.tid);
- waitFor.future.resolve(msg);
+ resolveAndDelete();
} else {
// wild-card
- this.waitForTid.delete(msg.tid);
- waitFor.future.resolve(msg);
+ resolveAndDelete();
}
}
};
bind<Q extends MsgBase, S extends MsgBase>(req: Q, opts: RequestOpts): ReadableStream<MsgWithError<S>> {
  const state: ActiveStream<S, Q> = {
    id: this.sthis.nextId().str,
    bind: {
      msg: req,
      opts,
    },
    // timeout: undefined,
    // controller: undefined,
  } satisfies ActiveStream<S, Q>;
  this.activeBinds.set(state.id, state);
  return new ReadableStream<MsgWithError<S>>({
    cancel: () => {
      // clearTimeout(state.timeout as number);
      this.activeBinds.delete(state.id);
    },
    start: (controller) => {
      this.onMsg<S>((msg) => {
        if (MsgIsError(msg)) {
          controller.enqueue(msg);
          return;
        }
        if (!MsgIsTid(msg, req.tid)) {
          return;
        }
        if (opts.waitFor && opts.waitFor(msg)) {
          controller.enqueue(msg);
        }
      });
      this.sendMsg(req);
      const future = new Future<S>();
      this.waitForTid.set(req.tid, { tid: req.tid, future, waitFor: opts.waitFor, timeout: opts.timeout });
      future.asPromise().then((msg) => {
        if (MsgIsError(msg)) {
          // double err emitting
          controller.enqueue(msg);
          controller.close();
        }
      });
    },
  });
}
🛠️ Refactor suggestion
Enhance stream handling and resource cleanup.
The stream implementation needs improvements in error handling and resource cleanup:
- Missing error handling in stream setup
- Resources might leak if stream setup fails
- Commented-out timeout cleanup
Apply this diff to improve the implementation:
return new ReadableStream<MsgWithError<S>>({
cancel: () => {
- // clearTimeout(state.timeout as number);
+ try {
+ // Cleanup any remaining resources
+ if (state.timeout) clearTimeout(state.timeout as number);
+ this.activeBinds.delete(state.id);
+ } catch (err) {
+ this.logger.Error().Err(err).Msg("Error during stream cleanup");
+ }
- this.activeBinds.delete(state.id);
},
start: (controller) => {
+ try {
this.onMsg<S>((msg) => {
if (MsgIsError(msg)) {
controller.enqueue(msg);
return;
}
if (!MsgIsTid(msg, req.tid)) {
return;
}
if (opts.waitFor && opts.waitFor(msg)) {
controller.enqueue(msg);
}
});
this.sendMsg(req);
const future = new Future<S>();
this.waitForTid.set(req.tid, { tid: req.tid, future, waitFor: opts.waitFor, timeout: opts.timeout });
future.asPromise().then((msg) => {
if (MsgIsError(msg)) {
// double err emitting
controller.enqueue(msg);
controller.close();
}
});
+ } catch (err) {
+ controller.error(err);
+ }
},
});
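The cleanup-on-cancel pattern suggested above can be sketched in isolation. This is a minimal illustration, not the project's actual API: `activeBinds`, `bindStream`, and the timeout behavior are stand-ins for the reviewed `bind` implementation.

```typescript
// Registry of active streams, mirroring the reviewed code's `activeBinds` map.
const activeBinds = new Map<string, { timeout?: ReturnType<typeof setTimeout> }>();

function bindStream(id: string, timeoutMs: number): ReadableStream<string> {
  const state: { timeout?: ReturnType<typeof setTimeout> } = {};
  activeBinds.set(id, state);
  return new ReadableStream<string>({
    start(controller) {
      try {
        // A timer stands in for the real message subscription.
        state.timeout = setTimeout(() => {
          controller.enqueue("timeout");
          controller.close();
        }, timeoutMs);
      } catch (err) {
        // Surface setup failures to the consumer instead of leaking silently.
        controller.error(err);
      }
    },
    cancel() {
      // Runs even when the consumer abandons the stream early, so the timer
      // and the registry entry cannot leak.
      if (state.timeout) clearTimeout(state.timeout);
      activeBinds.delete(id);
    },
  });
}
```

Because the underlying source's `cancel` callback is invoked synchronously when the consumer cancels, the registry entry is removed before `cancel()` returns.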
🧰 Tools
🪛 Biome (1.9.4)
[error] 179-179: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
calculatePreSignedUrl(p: PreSignedMsg): Promise<Result<URI>> {
  const rRes = this.sthis.env.gets({
    STORAGE_URL: param.REQUIRED,
    ACCESS_KEY_ID: param.REQUIRED,
    SECRET_ACCESS_KEY: param.REQUIRED,
    REGION: "us-east-1",
  });
  if (rRes.isErr()) {
    return Promise.resolve(Result.Err(rRes.Err()));
  }
  const res = rRes.Ok();
  return calculatePreSignedUrl(p, {
    storageUrl: URI.from(res.STORAGE_URL),
    aws: {
      accessKeyId: res.ACCESS_KEY_ID,
      secretAccessKey: res.SECRET_ACCESS_KEY,
      region: res.REGION,
    },
  });
}
🛠️ Refactor suggestion
Add input validation to calculatePreSignedUrl method.
The method should validate:

- Input PreSignedMsg parameter
- Required environment variables before use
calculatePreSignedUrl(p: PreSignedMsg): Promise<Result<URI>> {
+ if (!p) {
+ return Promise.resolve(Result.Err(new Error('Invalid PreSignedMsg: parameter is required')));
+ }
const rRes = this.sthis.env.gets({
STORAGE_URL: param.REQUIRED,
ACCESS_KEY_ID: param.REQUIRED,
SECRET_ACCESS_KEY: param.REQUIRED,
REGION: "us-east-1",
});
if (rRes.isErr()) {
return Promise.resolve(Result.Err(rRes.Err()));
}
const res = rRes.Ok();
+ if (!res.STORAGE_URL || !res.ACCESS_KEY_ID || !res.SECRET_ACCESS_KEY) {
+ return Promise.resolve(Result.Err(new Error('Missing required environment variables')));
+ }
return calculatePreSignedUrl(p, {
storageUrl: URI.from(res.STORAGE_URL),
aws: {
accessKeyId: res.ACCESS_KEY_ID,
secretAccessKey: res.SECRET_ACCESS_KEY,
region: res.REGION,
},
});
}
Actionable comments posted: 10
♻️ Duplicate comments (1)
src/cloud/meta-merger/meta-merger.test.ts (1)

Lines 18-20: ⚠️ Potential issue — Optimize array concatenation performance.
Using the spread operator in reduce has O(n²) time complexity. Use Array.prototype.push for better performance.
Apply this diff to optimize the implementation:

-function toCRDTEntries(rows: MetaConnection[]) {
-  return rows.reduce((r, i) => [...r, ...i.metas], [] as CRDTEntry[]);
-}
+function toCRDTEntries(rows: MetaConnection[]) {
+  const result: CRDTEntry[] = [];
+  for (const row of rows) {
+    result.push(...row.metas);
+  }
+  return result;
+}

🧰 Tools
🪛 Biome (1.9.4)
[error] 19-19: Avoid the use of spread (`...`) syntax on accumulators.
Spread syntax should be avoided on accumulators (like those in `.reduce`) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.
(lint/performance/noAccumulatingSpread)
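The lint rule's complexity claim is easy to demonstrate with a minimal sketch. The `CRDTEntry` and `MetaConnection` shapes below are stubbed for illustration, not the project's real types:

```typescript
type CRDTEntry = { cid: string };
type MetaConnection = { metas: CRDTEntry[] };

// O(n^2): each reduce step copies everything accumulated so far into a new array.
function toCRDTEntriesSlow(rows: MetaConnection[]): CRDTEntry[] {
  return rows.reduce((r, i) => [...r, ...i.metas], [] as CRDTEntry[]);
}

// O(n): one result array, appended to in place.
function toCRDTEntriesFast(rows: MetaConnection[]): CRDTEntry[] {
  const result: CRDTEntry[] = [];
  for (const row of rows) {
    result.push(...row.metas);
  }
  return result;
}
```

Both produce the same output; only the cost of the intermediate copies differs, which matters once the merged meta lists grow large.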
🧹 Nitpick comments (14)
src/cloud/backend/cf-dobj-abstract-sql.ts (1)

Lines 13-16: Add type validation for SQL results. The type casting of SQL results using `as T` and `as T[]` could be unsafe if the SQL execution returns unexpected data. Consider adding runtime type validation to ensure type safety:

 async run<T>(...params: SQLParams): Promise<T> {
   const res = (await this.db.dobj.execSql(this.sql, params)) as ExecSQLResult;
-  return res.rawResults[0] as T;
+  const result = res.rawResults[0];
+  if (!result || typeof result !== 'object') {
+    throw new Error('Invalid SQL result');
+  }
+  return result as T;
 }
 async all<T>(...params: SQLParams): Promise<T[]> {
   const res = (await this.db.dobj.execSql(this.sql, params)) as ExecSQLResult;
-  return res.rawResults as T[];
+  if (!Array.isArray(res.rawResults)) {
+    throw new Error('Invalid SQL results array');
+  }
+  return res.rawResults as T[];
 }
src/cloud/backend/env.d.ts (1)
Lines 13-24: Implement secure credential handling. The interface includes sensitive credentials like access keys, tokens, and private keys. Ensure these are:

- Never logged or exposed in error messages
- Stored securely using environment variables
- Rotated regularly
- Accessed only through secure methods
src/cloud/test-helper.ts (2)
Lines 42-42: Make timeouts configurable and improve error handling. The timeout values are hardcoded. Consider:

- Making timeouts configurable through parameters
- Adding retry logic for transient failures
- Improving error handling in timeout scenarios

Also applies to: 57-57, 81-81

Lines 164-178: Make the database path configurable and add error handling. Consider:

- Making the SQLite database path configurable
- Adding error handling for server startup failures
- Adding cleanup logic for failed server initialization
src/cloud/backend/cf-hono-server.ts (1)
Lines 113-130: Adapt CFHonoServer for the Docker environment. As part of the transition to Docker, consider:

- Removing Cloudflare-specific dependencies
- Implementing proper WebSocket handling for the Docker environment
- Adding comprehensive error handling
src/cloud/connection.test.ts (1)
Lines 76-91: Add cleanup in the afterAll hook. Ensure proper cleanup of server resources after tests complete.

 afterAll(async () => {
-  await server.close();
+  try {
+    await server.close();
+  } catch (error) {
+    console.error('Error during server cleanup:', error);
+  } finally {
+    // Additional cleanup if needed
+  }
 });

src/cloud/meta-merger/bettersql-abstract-sql.ts (2)
Lines 13-13: Replace `console.log` statements with proper logging or remove them. Using `console.log` for logging database operations may not be suitable in production. Consider using a dedicated logging library or remove these statements if not needed. Apply this diff to remove the `console.log` statements:

 const res = this.stmt.run(...sqliteCoerceParams(iparams)) as T;
-console.log("run", res);
 return res;

 const res = this.stmt.all(...sqliteCoerceParams(params)) as T[];
-console.log("all", res);
 return res;

Also applies to: 18-18

Lines 25-31: Add error handling for database initialization. When initializing the database, exceptions may occur if the database file cannot be opened. Add error handling to gracefully handle such scenarios.
Example:

constructor(dbOrPath: Database.Database | string) {
  try {
    if (typeof dbOrPath === "string") {
      this.db = new Database(dbOrPath);
    } else {
      this.db = dbOrPath;
    }
  } catch (error) {
    // Handle the error appropriately
    throw new Error(`Failed to initialize database: ${error.message}`);
  }
}

src/cloud/meta-merger/meta-by-tenant-ledger.ts (2)
Lines 28-35: Remove unused commented code. The commented SQL example from lines 28-35 is not needed and can be removed to keep the codebase clean.
Apply this diff:

-/*
-SELECT * FROM Mitarbeiter e1
-WHERE NOT EXISTS
-(
-  SELECT 1 FROM Mitarbeiter e2
-  WHERE e1.employee_id=e2.employee_id und e2.employee_name LIKE 'A%'
-);
- */

Lines 106-129: Remove unnecessary commented-out code. The large block of commented-out code from lines 106-129 increases clutter. If this code is no longer needed, consider removing it. Version control can be used to retrieve past code if necessary.
Apply this diff:

- /*
-  * select * from MetaByTenantLedger where tenant = 'tenant' and ledger = 'ledger' group by metaCID
-  */
-
- // readonly #sqlSelectByMetaCIDs = new ResolveOnce<Statement>()
- // sqlSelectByMetaCIDs(): Statement<string[], SQLMetaByTenantLedgerRow> {
- //   return this.#sqlSelectByMetaCIDs.once(() => {
- //     return this.db.prepare(`
- //       SELECT tenant, ledger, reqId, resId, metaCID, meta, updatedAt
- //       FROM MetaByTenantLedger
- //       WHERE metaCID in ?
- //     `);
- //   })
- // }
- // async selectByMetaCIDs(metaCIDs: string[]): Promise<MetaByTenantLedgerRow[]> {
- //   const stmt = this.sqlSelectByMetaCIDs();
- //   const rows = await stmt.all(metaCIDs)
- //   return rows.map(row => ({
- //     ...row,
- //     meta: JSON.parse(row.meta),
- //     updatedAt: new Date(row.updatedAt)
- //   } satisfies MetaByTenantLedgerRow))
- // }
13-13
: Remove commented-out debugging codeThe commented-out
console.log
statement is not needed and can be removed to clean up the code.Apply this diff:
- // console.log("cf-run", sqliteCoerceParams(iparams), bound);
src/cloud/meta-merger/meta-merger.test.ts (3)
22-28
: Remove commented-out code.The commented-out
filterConnection
function should be removed if it's no longer needed. If it's for future use, consider tracking it in a separate issue.
56-56
: Make database path configurable.The database path "./dist/test.db" is hardcoded. Consider making it configurable through environment variables or test configuration.
96-244
: Consider adding more test cases for robustness.While the current test coverage is good, consider adding tests for:
- Error handling when database operations fail
- Concurrent operations to verify thread safety
- Edge cases like invalid inputs or database connection issues
Would you like me to help generate additional test cases for these scenarios?
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- src/cloud/backend/cf-dobj-abstract-sql.ts (1 hunks)
- src/cloud/backend/cf-hono-server.ts (1 hunks)
- src/cloud/backend/env.d.ts (1 hunks)
- src/cloud/backend/server.ts (1 hunks)
- src/cloud/backend/wrangler.toml (1 hunks)
- src/cloud/client/cloud-gateway.test.ts (1 hunks)
- src/cloud/connection.test.ts (1 hunks)
- src/cloud/meta-merger/bettersql-abstract-sql.ts (1 hunks)
- src/cloud/meta-merger/cf-worker-abstract-sql.ts (1 hunks)
- src/cloud/meta-merger/meta-by-tenant-ledger.ts (1 hunks)
- src/cloud/meta-merger/meta-merger.test.ts (1 hunks)
- src/cloud/meta-merger/tenant-ledger.ts (1 hunks)
- src/cloud/test-helper.ts (1 hunks)
- vitest.cf-worker.config.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- vitest.cf-worker.config.ts
- src/cloud/client/cloud-gateway.test.ts
- src/cloud/meta-merger/tenant-ledger.ts
🧰 Additional context used
🪛 Biome (1.9.4)
src/cloud/backend/cf-hono-server.ts
[error] 57-57: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 57-57: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 83-83: Useless case clause, because the default clause is present.
Unsafe fix: Remove the useless case.
(lint/complexity/noUselessSwitchCase)
src/cloud/meta-merger/meta-merger.test.ts
[error] 19-19: Avoid the use of spread (`...`) syntax on accumulators.
Spread syntax should be avoided on accumulators (like those in `.reduce`) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.
(lint/performance/noAccumulatingSpread)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Quality Checks
🔇 Additional comments (9)
src/cloud/backend/wrangler.toml (1)
Lines 34-36: Security: Avoid hardcoded credentials in configuration files. The configuration contains hardcoded credentials for the storage backend. Even for testing purposes, it's better to use environment variables.
Also applies to: 65-67, 80-82, 96-98, 113-115
src/cloud/test-helper.ts (3)
Lines 115-115: Make timeouts configurable and improve error handling. The timeout values are hardcoded. Consider making them configurable and adding retry logic.

Also applies to: 130-130, 144-144

Lines 152-162: Validate TOML file existence and parse errors. The function needs better error handling for missing or malformed TOML files.

Lines 179-217: Improve process management and error handling. The server factory needs improvements in process cleanup, timeout handling, and error propagation.
src/cloud/backend/cf-hono-server.ts (2)
Lines 164-167: Implement proper WebSocket event handling. The onopen event handler is commented out and error handling could be improved.

Also applies to: 172-175

Lines 4-4: Consider replacing the Cloudflare dependency for the Docker transition. The import from "cloudflare:workers" should be replaced with a Docker-compatible alternative to align with the PR's objective of transitioning to a local Docker setup.
src/cloud/connection.test.ts (1)
Lines 49-66: ⚠️ Potential issue — Add input validation to the refURL function.
The function should validate the input parameters and environment variables before use.

 async function refURL(sp: ResOptionalSignedUrl) {
+  if (!sp?.signedUrl) {
+    throw new Error('Invalid ResSignedUrl: missing signedUrl');
+  }
   const { env } = await resolveToml("D1");
+  const requiredEnvVars = ['STORAGE_URL', 'ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', 'REGION'];
+  for (const envVar of requiredEnvVars) {
+    if (!env[envVar]) {
+      throw new Error(`Missing required environment variable: ${envVar}`);
+    }
+  }
   return (
     await calculatePreSignedUrl(sp, {
       storageUrl: URI.from(env.STORAGE_URL),

Likely invalid or redundant comment.
src/cloud/meta-merger/meta-by-tenant-ledger.ts (1)
Lines 76-85: Consider using `INSERT OR IGNORE` for handling duplicates. To handle potential insert conflicts more efficiently, use `INSERT OR IGNORE` instead of `WHERE NOT EXISTS`. This simplifies the SQL and improves performance.
Apply this diff to modify the insert statement:

 return this.db.prepare(`
-  INSERT INTO MetaByTenantLedger(tenant, ledger, reqId, resId, metaCID, meta, updatedAt)
-  SELECT ?, ?, ?, ?, ?, ?, ? WHERE NOT EXISTS (
-    SELECT 1 FROM MetaByTenantLedger WHERE metaCID = ?
-  )
+  INSERT OR IGNORE INTO MetaByTenantLedger(tenant, ledger, reqId, resId, metaCID, meta, updatedAt)
+  VALUES (?, ?, ?, ?, ?, ?, ?)
 `);

Update the `ensure` method accordingly:

 return stmt.run(
   t.tenant,
   t.ledger,
   t.reqId,
   t.resId,
   t.metaCID,
   JSON.stringify(t.meta),
   t.updatedAt.toISOString(),
-  t.metaCID
 );
src/cloud/meta-merger/meta-merger.test.ts (1)
Lines 63-94: Well-structured test setup! The test setup follows best practices with proper isolation, cleanup, and type safety:

- Uses unique IDs for each test
- Cleans up after each test
- Properly typed with TypeScript's satisfies operator
prepare(sql: string): SQLStatement {
  return new CFDObjSQLStatement(this, sql);
}
🛠️ Refactor suggestion
Add SQL injection prevention.
The SQL string is passed directly to the statement without validation. Consider adding SQL injection prevention measures.
prepare(sql: string): SQLStatement {
+ // Validate SQL string to prevent SQL injection
+ if (sql.toLowerCase().includes('drop') || sql.toLowerCase().includes('truncate')) {
+ throw new Error('Potentially harmful SQL statement detected');
+ }
return new CFDObjSQLStatement(this, sql);
}
Committable suggestion skipped: line range outside the PR's diff.
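A keyword blocklist like the one suggested above is easy to bypass; the conventional defense is to keep the SQL text static and pass user data only through bound parameters. The guard below is a hypothetical sketch of that idea, not the project's API: it rejects SQL text that appears to embed string literals instead of `?` placeholders.

```typescript
// Crude heuristic for illustration only: a quoted string literal in the SQL
// text suggests data was interpolated instead of bound. Real enforcement
// belongs in code review and in always using prepared-statement parameters.
function assertParameterized(sql: string): void {
  if (/'[^']*'/.test(sql)) {
    throw new Error("Use bound parameters (?) instead of inline literals");
  }
}
```

With this in place, `prepare("SELECT * FROM t WHERE id = ?")` passes while `prepare("SELECT * FROM t WHERE id = 'abc'")` fails fast, nudging callers toward `stmt.run(value)`-style binding.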
src/cloud/backend/wrangler.toml
Outdated
[[env.test.d1_databases]]
binding = "FP_D1"
database_name = "test-meta-merge"
database_id = "b8b452ed-b1d9-478f-b56c-c5e4545f157a"
Use unique database IDs for different environments.
Using the same database ID (b8b452ed-b1d9-478f-b56c-c5e4545f157a) across different environments could lead to data conflicts. Consider using unique database IDs for each environment.
Also applies to: 73-76, 105-108
src/cloud/backend/cf-hono-server.ts
Outdated
export function getDurableObject(env: Env) {
  // console.log("getDurableObject", env);
  const cfBackendKey = env.CF_BACKEND_KEY ?? "FP_DO";
  const rany = env as unknown as Record<string, DurableObjectNamespace<FPDurableObject>>;
  const dObjNs = rany[cfBackendKey];
  const id = dObjNs.idFromName(env.FP_DO_ID ?? cfBackendKey);
  return dObjNs.get(id);
}
Add error handling for environment variables.
The function should validate environment variables before accessing them. The type casting to `unknown` is also a potential source of runtime errors.
export function getDurableObject(env: Env) {
- const cfBackendKey = env.CF_BACKEND_KEY ?? "FP_DO";
- const rany = env as unknown as Record<string, DurableObjectNamespace<FPDurableObject>>;
- const dObjNs = rany[cfBackendKey];
- const id = dObjNs.idFromName(env.FP_DO_ID ?? cfBackendKey);
- return dObjNs.get(id);
+ const cfBackendKey = env.CF_BACKEND_KEY ?? "FP_DO";
+ if (!env[cfBackendKey]) {
+ throw new Error(`Missing required environment variable: ${cfBackendKey}`);
+ }
+ const dObjNs = env[cfBackendKey] as DurableObjectNamespace<FPDurableObject>;
+ if (!dObjNs || typeof dObjNs.idFromName !== 'function') {
+ throw new Error(`Invalid DurableObjectNamespace for key: ${cfBackendKey}`);
+ }
+ const doId = env.FP_DO_ID ?? cfBackendKey;
+ const id = dObjNs.idFromName(doId);
+ return dObjNs.get(id);
}
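The fail-fast lookup pattern in the diff above can be sketched generically. The `NamespaceLike` interface and the env shape here are hypothetical stand-ins for Cloudflare's `DurableObjectNamespace` binding, not the real types:

```typescript
// Stand-in for the small slice of DurableObjectNamespace this sketch needs.
interface NamespaceLike {
  idFromName(name: string): string;
}

function requireNamespace(env: Record<string, unknown>, key: string): NamespaceLike {
  const ns = env[key];
  // Validate both presence and shape before use, so a missing or mistyped
  // binding produces a descriptive error instead of a TypeError on undefined.
  if (!ns || typeof (ns as NamespaceLike).idFromName !== "function") {
    throw new Error(`Missing or invalid namespace binding: ${key}`);
  }
  return ns as NamespaceLike;
}
```

The same shape check replaces the unchecked `env as unknown as Record<...>` cast with one validated narrowing at the boundary.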
src/cloud/backend/server.ts
Outdated
export class FPDurableObject extends DurableObject<Env> {
  // readonly sessions: Map<WebSocket, FPMetaGroup> = new Map<WebSocket, FPMetaGroup>();
  // readonly lastMetaByTendant = new Map<string, ReqRes<ReqPutMeta, ResPutMeta>[]>();
  // readonly wsEvents: WSEvents = {};

  // injectWSEvents(wsEvents: WSEvents): void {
  //   Object.assign(this.wsEvents, wsEvents);
  // }
  // getSQLDatabase(): SQLDatabase {
Refactor FPDurableObject for Docker environment.
The class needs to be adapted for the Docker transition:

- Remove the Cloudflare DurableObject dependency
- Implement proper error handling
- Add proper type definitions
async execSql(sql: string, params: unknown[]): Promise<ExecSQLResult> {
  const cursor = await this.ctx.storage.sql.exec(sql, ...params);
  const rawResults = cursor.toArray();
  const res = {
    rowsRead: cursor.rowsRead,
    rowsWritten: cursor.rowsWritten,
    rawResults,
  };
  // console.log("execSql", sql, params, res);
  return res;
}
Add error handling to execSql method.
The SQL execution needs proper error handling and parameter validation.
async execSql(sql: string, params: unknown[]): Promise<ExecSQLResult> {
+ if (!sql) {
+ throw new Error('SQL query is required');
+ }
+ if (!Array.isArray(params)) {
+ throw new Error('Parameters must be an array');
+ }
+ try {
const cursor = await this.ctx.storage.sql.exec(sql, ...params);
const rawResults = cursor.toArray();
const res = {
rowsRead: cursor.rowsRead,
rowsWritten: cursor.rowsWritten,
rawResults,
};
return res;
+ } catch (error) {
+ throw new Error(`SQL execution failed: ${error instanceof Error ? error.message : 'Unknown error'}`);
+ }
}
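The validate-then-wrap pattern above can be sketched without the Durable Object context. Here `exec` is a hypothetical stand-in for `ctx.storage.sql.exec`; only the shape of the guard is the point:

```typescript
function safeExec(
  exec: (sql: string, ...params: unknown[]) => Promise<unknown[]>,
  sql: string,
  params: unknown[],
): Promise<unknown[]> {
  // Synchronous input validation throws immediately, before any I/O starts.
  if (!sql) throw new Error("SQL query is required");
  if (!Array.isArray(params)) throw new Error("Parameters must be an array");
  // Low-level failures are rethrown with context so callers see which layer broke.
  return exec(sql, ...params).catch((error) => {
    throw new Error(`SQL execution failed: ${error instanceof Error ? error.message : "Unknown error"}`);
  });
}
```

Validating before returning the promise means bad arguments surface as synchronous throws at the call site, while execution errors arrive as rejected promises carrying the wrapped message.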
async run<T>(...iparams: SQLParams): Promise<T> {
  const res = (await this.stmt.run(...sqliteCoerceParams(iparams))) as T;
  console.log("run", res);
  return res;
}
🛠️ Refactor suggestion
Refactor to remove unnecessary `async` and `await`.
The better-sqlite3 library methods are synchronous. Using `async` and `await` in the `run` method is unnecessary and may cause confusion. Refactor the method to be synchronous.
Apply this diff to the `run` method:
- async run<T>(...iparams: SQLParams): Promise<T> {
- const res = (await this.stmt.run(...sqliteCoerceParams(iparams))) as T;
+ run<T>(...iparams: SQLParams): T {
+ const res = this.stmt.run(...sqliteCoerceParams(iparams)) as T;
console.log("run", res);
return res;
}
async all<T>(...params: SQLParams): Promise<T[]> {
  const res = (await this.stmt.all(...sqliteCoerceParams(params))) as T[];
  console.log("all", res);
  return res;
}
🛠️ Refactor suggestion
Refactor the `all` method to be synchronous.
Similarly, the `all` method in better-sqlite3 is synchronous. Remove `async` and `await` to simplify the code.
Apply this diff to the `all` method:
- async all<T>(...params: SQLParams): Promise<T[]> {
- const res = (await this.stmt.all(...sqliteCoerceParams(params))) as T[];
+ all<T>(...params: SQLParams): T[] {
+ const res = this.stmt.all(...sqliteCoerceParams(params)) as T[];
console.log("all", res);
return res;
}
readonly resId: string;
readonly metaCID: string;
readonly meta: CRDTEntry;
readonly updateAt: Date;
🛠️ Refactor suggestion
Fix inconsistent use of `updateAt` and `updatedAt`.
There is an inconsistency between the property names `updateAt` and `updatedAt`. This can lead to bugs or confusion. Standardize the naming to `updatedAt` throughout the code.
Apply this diff to the `MetaByTenantLedgerRow` interface and related code:

- readonly updateAt: Date;
+ readonly updatedAt: Date;

In the SQL mapping (line 169):

- updateAt: new Date(row.updatedAt),
+ updatedAt: new Date(row.updatedAt),

Also applies to: 24-24, 51-51, 168-169
async all<T>(...params: SQLParams): Promise<T[]> {
  const rows = await this.stmt.bind(...sqliteCoerceParams(params)).run();
  return rows.results as T[];
}
Use `all()` instead of `run()` to retrieve query results.
In the `all` method, `run()` does not return the results of a SELECT query. Use `all()` to retrieve query results when working with Cloudflare D1.
Apply this diff to correct the `all` method:
async all<T>(...params: SQLParams): Promise<T[]> {
const bound = this.stmt.bind(...sqliteCoerceParams(params));
- const rows = await bound.run();
+ const result = await bound.all();
+ const rows = result.results;
return rows as T[];
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

```ts
async all<T>(...params: SQLParams): Promise<T[]> {
  const rows = await this.stmt.bind(...sqliteCoerceParams(params)).run();
  return rows.results as T[];
}
```

Suggested:

```ts
async all<T>(...params: SQLParams): Promise<T[]> {
  const bound = this.stmt.bind(...sqliteCoerceParams(params));
  const result = await bound.all();
  const rows = result.results;
  return rows as T[];
}
```
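For readers unfamiliar with the distinction, the following self-contained sketch illustrates why the rows must be unwrapped: `all()` returns an envelope object whose `results` field carries the rows. Note that `FakeD1Statement` and `allRows` below are hypothetical stand-ins written for illustration — they are not the real D1 binding or part of this codebase.

```typescript
// Hypothetical stand-in for a D1-style prepared statement (illustration only).
interface D1LikeResult<T> {
  success: boolean;
  results: T[];
}

class FakeD1Statement<T> {
  constructor(private rows: T[]) {}

  bind(..._params: unknown[]): FakeD1Statement<T> {
    return this;
  }

  // run() reports execution status but, per the review comment,
  // is not the call to use for reading SELECT rows in this sketch.
  async run(): Promise<{ success: boolean }> {
    return { success: true };
  }

  // all() carries the rows, wrapped in a `results` field.
  async all(): Promise<D1LikeResult<T>> {
    return { success: true, results: this.rows };
  }
}

// The wrapper pattern suggested by the review: unwrap `results` from all().
async function allRows<T>(stmt: FakeD1Statement<T>, ...params: unknown[]): Promise<T[]> {
  const result = await stmt.bind(...params).all();
  return result.results;
}

const stmt = new FakeD1Statement([{ id: 1 }, { id: 2 }]);
allRows(stmt).then((rows) => {
  console.log(rows.length); // 2
});
```

The key point is that the caller never sees the envelope: the wrapper returns the plain row array that the rest of the SQL abstraction expects.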
```ts
async run<T>(...iparams: SQLParams): Promise<T> {
  const bound = this.stmt.bind(...sqliteCoerceParams(iparams));
  // console.log("cf-run", sqliteCoerceParams(iparams), bound);
  return bound.run() as T;
}
```
🛠️ Refactor suggestion

**Update return type and handling in the `run` method**

The `run()` method returns a `D1Result`, which may not match the expected type `T`. Adjust the return type to reflect the actual return value and handle any potential errors.

Modify the `run` method:

```diff
-async run<T>(...iparams: SQLParams): Promise<T> {
+async run(...iparams: SQLParams): Promise<D1Result> {
   const bound = this.stmt.bind(...sqliteCoerceParams(iparams));
   return bound.run();
 }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Original:

```ts
async run<T>(...iparams: SQLParams): Promise<T> {
  const bound = this.stmt.bind(...sqliteCoerceParams(iparams));
  // console.log("cf-run", sqliteCoerceParams(iparams), bound);
  return bound.run() as T;
}
```

Suggested:

```ts
async run(...iparams: SQLParams): Promise<D1Result> {
  const bound = this.stmt.bind(...sqliteCoerceParams(iparams));
  // console.log("cf-run", sqliteCoerceParams(iparams), bound);
  return bound.run();
}
```
Actionable comments posted: 2
♻️ Duplicate comments (4)
src/cloud/backend/wrangler.toml (2)
34-36
: ⚠️ Potential issue — Security: Avoid hardcoded credentials in configuration files
The configuration contains hardcoded credentials for the storage backend. Even for testing purposes, it's better to use environment variables.
48-51
: ⚠️ Potential issue — Use unique database IDs for different environments.
Using the same database ID across different environments could lead to data conflicts. Consider using unique database IDs for each environment.
Also applies to: 73-76, 105-108
src/cloud/connection.test.ts (2)
49-66
: ⚠️ Potential issue — Add error handling and validation to the refURL function.
The function lacks input validation and error handling for critical parameters.
Apply this diff to add necessary safeguards:
```diff
-async function refURL(sp: ResOptionalSignedUrl) {
+async function refURL(sp: ResOptionalSignedUrl) {
+  if (!sp?.signedUrl) {
+    throw new Error('Invalid ResOptionalSignedUrl: missing signedUrl');
+  }
+  const { env } = await resolveToml("D1");
+
+  // Validate required environment variables
+  const requiredEnvVars = ['STORAGE_URL', 'ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', 'REGION'];
+  for (const envVar of requiredEnvVars) {
+    if (!env[envVar]) {
+      throw new Error(`Missing required environment variable: ${envVar}`);
+    }
+  }
+
+  const amzDate = URI.from(sp.signedUrl).getParam("X-Amz-Date");
+  if (!amzDate) {
+    throw new Error('Missing X-Amz-Date parameter in signed URL');
+  }
   return (
     await calculatePreSignedUrl(sp, {
       storageUrl: URI.from(env.STORAGE_URL),
       aws: {
         accessKeyId: env.ACCESS_KEY_ID,
         secretAccessKey: env.SECRET_ACCESS_KEY,
         region: env.REGION,
       },
       test: {
-        amzDate: URI.from(sp.signedUrl).getParam("X-Amz-Date"),
+        amzDate,
       },
     })
   )
     .Ok()
     .asObj();
 }
```
325-356
: 🛠️ Refactor suggestion — Enhance WAL testing scenarios. Current WAL tests only verify URL signing; additional scenarios should be tested.
Current WAL tests only verify URL signing. Additional scenarios should be tested.
Add the following test scenarios:
- WAL entry ordering
- Conflict resolution
- Recovery scenarios
- Concurrent WAL operations
Example implementation:
describe("WAL", async () => { // Existing tests... it("maintains WAL entry order", async () => { const sp = sup(); const entries = Array(5).fill(0).map(() => ({ id: sthis.timeOrderedNextId().str, data: "test" })); // Write entries for (const entry of entries) { const res = await conn.request(buildReqPutWAL(sthis, { ...sp, entry }, gwCtx), { waitFor: MsgIsResPutWAL }); expect(MsgIsResPutWAL(res)).toBeTruthy(); } // Read and verify order const res = await conn.request(buildReqGetWAL(sthis, sp, gwCtx), { waitFor: MsgIsResGetWAL }); expect(MsgIsResGetWAL(res)).toBeTruthy(); // Verify entries are in order const readEntries = await fetchWALEntries(res.signedUrl); expect(readEntries).toEqual(entries); }); it("handles concurrent WAL operations", async () => { const sp = sup(); const operations = Array(5).fill(0).map(() => conn.request(buildReqPutWAL(sthis, sp, gwCtx), { waitFor: MsgIsResPutWAL })); const results = await Promise.all(operations); results.forEach(res => { expect(MsgIsResPutWAL(res)).toBeTruthy(); }); }); });
🧹 Nitpick comments (6)
src/cloud/node-hono-server.ts (1)
23-34
: Enhance WebSocket connection management.Consider the following improvements:
- Move the
wsConnections
map inside the class to avoid global state.- Add connection validation before acceptance.
- Implement connection cleanup on close.
```diff
-const wsConnections = new Map<string, WebSocket>();
 class NodeWSRoom implements WSRoom {
+  private readonly wsConnections = new Map<string, WebSocket>();
   readonly sthis: SuperThis;
   constructor(sthis: SuperThis) {
     this.sthis = sthis;
   }
   acceptConnection(ws: WebSocket): void {
+    if (ws.readyState !== WebSocket.CONNECTING) {
+      throw new Error('Invalid WebSocket state');
+    }
     const id = this.sthis.nextId(12).str;
     wsConnections.set(id, ws);
     ws.accept();
+    ws.addEventListener('close', () => {
+      this.wsConnections.delete(id);
+    });
   }
 }
```

src/cloud/hono-server.ts (3)
43-43
: Consider usingundefined
instead ofvoid
in union typesUsing
void
in union types can be confusing as it represents the absence of a return value rather than a value type. Consider usingundefined
instead for better type safety:

```diff
-export type ConnMiddleware = (conn: WSConnection, c: Context, next: Next) => Promise<Response | void>;
+export type ConnMiddleware = (conn: WSConnection, c: Context, next: Next) => Promise<Response | undefined>;
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 43-43: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
160-160
: Consider usingundefined
instead ofvoid
in union typesUsing
void
in union types can be confusing as it represents the absence of a return value rather than a value type. Consider usingundefined
instead for better type safety:

```diff
-  inject(c: Context, fn: (rt: RunTimeParams) => Promise<Response | void>): Promise<Response | void>;
+  inject(c: Context, fn: (rt: RunTimeParams) => Promise<Response | undefined>): Promise<Response | undefined>;
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 160-160: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 160-160: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
217-239
: Add detailed logging for WebSocket message handling. Given the reported issue with tests not recognizing data writes, consider adding detailed logging in the WebSocket message handler to help debug the storage backend integration:
```diff
 onMessage: async (event, ws) => {
+  logger.Debug().Msg("Processing WebSocket message");
   const rMsg = await exception2Result(async () => ende.decode(await top_uint8(event.data)) as MsgBase);
   if (rMsg.isErr()) {
+    logger.Error().Err(rMsg.Err()).Msg("Failed to decode WebSocket message");
     ws.send(
       ende.encode(
         buildErrorMsg(
           sthis,
           logger,
           {
             message: event.data,
           } as ErrorMsg,
           rMsg.Err()
         )
       )
     );
   } else {
+    logger.Debug().Interface("message", rMsg.Ok()).Msg("Dispatching WebSocket message");
     await dp.dispatch(impl, rMsg.Ok(), (msg) => {
       const str = ende.encode(msg);
+      logger.Debug().Msg("Sending WebSocket response");
       ws.send(str);
       return new Response(str);
     });
   }
 },
```

src/cloud/connection.test.ts (2)
79-79
: Refactor port selection logic. Move the random port selection logic to a helper function for better reusability and maintainability.
```diff
-  const port = +(process.env.FP_WRANGLER_PORT || 0) || 1024 + Math.floor(Math.random() * (65536 - 1024));
+  const port = getTestPort();
+
+function getTestPort(): number {
+  const minPort = 1024;
+  const maxPort = 65535;
+  return +(process.env.FP_WRANGLER_PORT || 0) || minPort + Math.floor(Math.random() * (maxPort - minPort + 1));
+}
```
102-106
: Add configurable timeout and retry tests. The timeout test should use a configurable timeout value, and additional tests for connection retry scenarios should be added.
```diff
+const DEFAULT_TIMEOUT = 5000;
+
 it(`timeout`, async () => {
-  const rC = await applyStart(style.timeout.open());
+  const rC = await applyStart(style.timeout.open(DEFAULT_TIMEOUT));
   expect(rC.isErr()).toBeTruthy();
   expect(rC.Err().message).toMatch(/Timeout/i);
 });
+
+it(`retry succeeds`, async () => {
+  const rC = await applyStart(style.retry.open({ maxRetries: 3, retryDelay: 1000 }));
+  expect(rC.isOk()).toBeTruthy();
+});
```
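The retry test suggested above assumes retry-with-delay semantics on the connection opener. As a rough, self-contained illustration of that behavior, here is a hypothetical `withRetry` helper (not part of this codebase) that retries a failing async operation a bounded number of times with a fixed delay:

```typescript
// Hypothetical retry helper: retries `fn` up to `maxRetries` additional times,
// waiting `retryDelayMs` between attempts, and rethrows the last error on exhaustion.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  retryDelayMs: number
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt < maxRetries) {
        await new Promise((r) => setTimeout(r, retryDelayMs));
      }
    }
  }
  throw lastErr;
}

// Demo: fails twice, then succeeds on the third attempt.
let calls = 0;
withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error("connect failed");
  return "connected";
}, 3, 10).then((res) => {
  console.log(res, calls); // connected 3
});
```

A production version would typically add exponential backoff and jitter, but the bounded-attempts/delay shape is what the suggested `{ maxRetries: 3, retryDelay: 1000 }` options imply.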
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- src/cloud/backend/cf-dobj-abstract-sql.ts (1 hunks)
- src/cloud/backend/cf-hono-server.ts (1 hunks)
- src/cloud/backend/wrangler.toml (1 hunks)
- src/cloud/connection.test.ts (1 hunks)
- src/cloud/hono-server.ts (1 hunks)
- src/cloud/meta-merger/bettersql-abstract-sql.ts (1 hunks)
- src/cloud/node-hono-server.ts (1 hunks)
- src/cloud/ws-room.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- src/cloud/backend/cf-dobj-abstract-sql.ts
- src/cloud/meta-merger/bettersql-abstract-sql.ts
🧰 Additional context used
🪛 Biome (1.9.4)
src/cloud/node-hono-server.ts
[error] 51-51: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 51-51: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/hono-server.ts
[error] 43-43: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 160-160: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 160-160: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/backend/cf-hono-server.ts
[error] 65-65: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 65-65: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 95-95: Useless case clause.
because the default clause is present:
Unsafe fix: Remove the useless case.
(lint/complexity/noUselessSwitchCase)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Quality Checks
🔇 Additional comments (10)
src/cloud/ws-room.ts (1)
1-3
: LGTM! Well-defined interface.

The `WSRoom` interface follows the Single Responsibility Principle and provides a clear contract for WebSocket connection acceptance.

src/cloud/backend/wrangler.toml (1)

1-4
: Verify the compatibility date.

The compatibility date is set to "2024-04-19", which is beyond the current date. Please ensure this is intentional and verify if there are any compatibility requirements for this specific date.
src/cloud/node-hono-server.ts (2)
37-40
: 🛠️ Refactor suggestion — Remove non-null assertions and initialize properties.

The use of non-null assertions (!) is risky. Initialize these properties in the constructor or mark them as optional.

```diff
-  _upgradeWebSocket!: UpgradeWebSocket;
-  _injectWebSocket!: (t: unknown) => void;
-  _serve!: serveFn;
-  _server!: ServerType;
+  private _upgradeWebSocket?: UpgradeWebSocket;
+  private _injectWebSocket?: (t: unknown) => void;
+  private _serve?: serveFn;
+  private _server?: ServerType;
```

Likely invalid or redundant comment.
88-95
: ⚠️ Potential issue — Add error handling for server startup.

The serve method lacks proper error handling and could leave the application in an inconsistent state if initialization fails.

```diff
 async serve(app: Hono, port: number): Promise<void> {
+  if (!this._serve || !this._injectWebSocket) {
+    throw new Error('Server not initialized. Call start() first.');
+  }
   await new Promise<void>((resolve) => {
     this._server = this._serve({ fetch: app.fetch, port }, () => {
       this._injectWebSocket(this._server);
       resolve();
     });
   });
 }
```

Likely invalid or redundant comment.
src/cloud/backend/cf-hono-server.ts (1)
179-195
: 🛠️ Refactor suggestion — Implement proper WebSocket event handling.

The WebSocket event handlers need improvements:

- The onopen handler is commented out
- Error handling could be improved
- Message handling lacks try-catch

```diff
-// server.onopen = (ev) => {
-//   console.log("onopen", ev);
-//   wsEvents.onOpen?.(ev, wsCtx);
-// }
+server.onopen = (ev) => {
+  this.logger.Info().Msg("WebSocket connection opened");
+  wsEvents.onOpen?.(ev, wsCtx);
+};
 server.onerror = (err) => {
-  // console.log("onerror", err);
+  this.logger.Error().Err(err).Msg("WebSocket error occurred");
   wsEvents.onError?.(err, wsCtx);
 };
 server.onclose = (ev) => {
-  // console.log("onclose", ev);
+  this.logger.Info().Msg("WebSocket connection closed");
   wsEvents.onClose?.(ev, wsCtx);
 };
 server.onmessage = (evt) => {
-  // console.log("onmessage", evt);
-  // wsCtx.send("Hellox from server");
-  wsEvents.onMessage?.(evt, wsCtx);
+  try {
+    wsEvents.onMessage?.(evt, wsCtx);
+  } catch (error) {
+    this.logger.Error().Err(error).Msg("Error processing WebSocket message");
+    wsEvents.onError?.(error, wsCtx);
+  }
 };
```

Likely invalid or redundant comment.
src/cloud/hono-server.ts (4)
136-155
: Add input validation to the calculatePreSignedUrl method.

The method should validate the input PreSignedMsg parameter and required environment variables before use.

167-173
: CORS configuration needs environment-specific settings.

The current CORS configuration uses permissive settings that are not environment-aware.

233-238
: Add error handling for message encoding.

The message encoding process lacks error handling, which could lead to unhandled exceptions.

240-243
: Implement proper cleanup in the WebSocket onClose handler.

The onClose handler only clears the dispatcher but doesn't properly clean up resources or log the connection closure.
src/cloud/connection.test.ts (1)
76-78
: Verify Docker environment compatibility.

Since the PR transitions to a local Docker setup, ensure that the tests work correctly in the Docker environment.

✅ Verification successful

Docker environment compatibility verified ✓

The test infrastructure properly supports both Node and Cloudflare environments, with appropriate Docker configurations present in the `tests/` directory. The server factory implementation correctly handles environment-specific setups and configurations.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Description: Verify Docker environment setup and test execution

# Check if Docker setup scripts exist
if ! fd -t f "start-(cloud|s3).sh" test/; then
  echo "Error: Docker setup scripts not found in test/ directory"
  exit 1
fi

# Check if Docker-specific environment variables are documented
rg -l "FP_WRANGLER_PORT|STORAGE_URL|ACCESS_KEY_ID|SECRET_ACCESS_KEY|REGION" "README.md" || \
  echo "Warning: Docker environment variables not documented in README.md"
```

Length of output: 262

Script:

```bash
#!/bin/bash
# Search for Docker-related files and configuration
echo "=== Docker Files ==="
fd -t f "Dockerfile|docker-compose.yml"

echo -e "\n=== Environment Setup ==="
# Search for environment setup documentation
rg -i "docker|environment|setup" "README.md" || echo "No setup instructions found in README.md"

echo -e "\n=== Test Configuration ==="
# Look for test configuration files
fd -e json -e js -e ts "test.config|jest.config|vitest.config"

echo -e "\n=== Server Factory Implementation ==="
# Check server factory implementation
rg -A 5 "NodeHonoServerFactory|CFHonoServerFactory" --type ts
```

Length of output: 3579
src/cloud/backend/cf-hono-server.ts
Outdated
```ts
class CFWSRoom implements WSRoom {
  acceptConnection(ws: WebSocket): void {
    ws.accept();
    // this._wsConnections.set(ws.id, ws);
  }
}
```
🛠️ Refactor suggestion
Implement WebSocket connection tracking.
The commented code suggests that connection tracking was planned but not implemented. This could lead to resource leaks.
class CFWSRoom implements WSRoom {
+ private readonly _wsConnections = new Map<string, WebSocket>();
+
acceptConnection(ws: WebSocket): void {
ws.accept();
- // this._wsConnections.set(ws.id, ws);
+ const id = crypto.randomUUID();
+ this._wsConnections.set(id, ws);
+ ws.addEventListener('close', () => {
+ this._wsConnections.delete(id);
+ });
}
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

```ts
class CFWSRoom implements WSRoom {
  acceptConnection(ws: WebSocket): void {
    ws.accept();
    // this._wsConnections.set(ws.id, ws);
  }
}
```

Suggested:

```ts
class CFWSRoom implements WSRoom {
  private readonly _wsConnections = new Map<string, WebSocket>();

  acceptConnection(ws: WebSocket): void {
    ws.accept();
    const id = crypto.randomUUID();
    this._wsConnections.set(id, ws);
    ws.addEventListener('close', () => {
      this._wsConnections.delete(id);
    });
  }
}
```
```ts
describe("Data", async () => {
  it("Get", async () => {
    const sp = sup();
    const res = await conn.request(buildReqGetData(sthis, sp, gwCtx), { waitFor: MsgIsResGetData });
    if (MsgIsResGetData(res)) {
      // expect(res.params).toEqual(sp);
      expect(URI.from(res.signedUrl).asObj()).toEqual(await refURL(res));
    } else {
      assert.fail("expected MsgResGetData", JSON.stringify(res));
    }
  });
  it("Put", async () => {
    const sp = sup();
    const res = await conn.request(buildReqPutData(sthis, sp, gwCtx), { waitFor: MsgIsResPutData });
    if (MsgIsResPutData(res)) {
      // expect(res.params).toEqual(sp);
      expect(URI.from(res.signedUrl).asObj()).toEqual(await refURL(res));
    } else {
      assert.fail("expected MsgResPutData", JSON.stringify(res));
    }
  });
  it("Del", async () => {
    const sp = sup();
    const res = await conn.request(buildReqDelData(sthis, sp, gwCtx), { waitFor: MsgIsResDelData });
    if (MsgIsResDelData(res)) {
      // expect(res.params).toEqual(sp);
      expect(URI.from(res.signedUrl).asObj()).toEqual(await refURL(res));
    } else {
      assert.fail("expected MsgResDelData", JSON.stringify(res));
    }
  });
});
```
🛠️ Refactor suggestion
Add error scenario tests for data operations.
The Data operations tests only cover successful scenarios. Error cases should be tested.
Add tests for:
- Invalid paths
- Network errors
- Permission errors
- Concurrent operations
Example implementation:
```ts
describe("Data", async () => {
  // Existing tests...
  it("handles invalid paths", async () => {
    const sp = { ...sup(), path: "../invalid/path" };
    const res = await conn.request(buildReqGetData(sthis, sp, gwCtx),
      { waitFor: MsgIsResGetData });
    expect(MsgIsError(res)).toBeTruthy();
    expect(res.message).toMatch(/invalid path/i);
  });
  it("handles concurrent operations", async () => {
    const sp = sup();
    const operations = Array(5).fill(0).map(() =>
      conn.request(buildReqPutData(sthis, sp, gwCtx),
        { waitFor: MsgIsResPutData }));
    const results = await Promise.all(operations);
    results.forEach(res => {
      expect(MsgIsResPutData(res)).toBeTruthy();
    });
  });
});
```
this almost passes for me:
test output here https://gist.github.com/jchris/d1f3538fec0c16ef838e186763c4b51e |
Actionable comments posted: 7
♻️ Duplicate comments (1)
src/cloud/meta-merger/meta-merger.test.ts (1)
18-20
: ⚠️ Potential issue — Optimize array concatenation performance.

Using the spread operator in reduce has O(n²) time complexity. Use Array.prototype.push for better performance.

Apply this diff to optimize the implementation:

```diff
-function toCRDTEntries(rows: MetaConnection[]) {
-  return rows.reduce((r, i) => [...r, ...i.metas], [] as CRDTEntry[]);
+function toCRDTEntries(rows: MetaConnection[]) {
+  const result: CRDTEntry[] = [];
+  for (const row of rows) {
+    result.push(...row.metas);
+  }
+  return result;
+}
```
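A runnable comparison of the two approaches makes the difference concrete. The type names below are simplified stand-ins for the real `MetaConnection`/`CRDTEntry` types; the point is that both functions produce identical output, but the spread-in-reduce version copies the accumulator on every iteration while the push version appends in place:

```typescript
interface CRDTEntryLike {
  cid: string;
}
interface MetaConnectionLike {
  metas: CRDTEntryLike[];
}

// O(n^2): `[...r, ...i.metas]` copies the whole accumulator each iteration.
function toCRDTEntriesSpread(rows: MetaConnectionLike[]): CRDTEntryLike[] {
  return rows.reduce((r, i) => [...r, ...i.metas], [] as CRDTEntryLike[]);
}

// O(n): push appends in place, no per-iteration copy of the accumulator.
function toCRDTEntriesPush(rows: MetaConnectionLike[]): CRDTEntryLike[] {
  const result: CRDTEntryLike[] = [];
  for (const row of rows) {
    result.push(...row.metas);
  }
  return result;
}

const sampleRows = [{ metas: [{ cid: "a" }] }, { metas: [{ cid: "b" }, { cid: "c" }] }];
console.log(
  JSON.stringify(toCRDTEntriesSpread(sampleRows)) === JSON.stringify(toCRDTEntriesPush(sampleRows))
); // true
```

For the handful of rows typical in tests the difference is invisible, but the linear version is the safe default once connection counts grow.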
🪛 Biome (1.9.4)
[error] 19-19: Avoid the use of spread (
...
) syntax on accumulators.Spread syntax should be avoided on accumulators (like those in
.reduce
) because it causes a time complexity ofO(n^2)
.
Consider methods such as .splice or .push instead.(lint/performance/noAccumulatingSpread)
🧹 Nitpick comments (10)
src/cloud/meta-merger/meta-merger.test.ts (3)
22-28
: Remove commented-out code.The
filterConnection
function is commented out and appears to be unused. If this code is no longer needed, it should be removed to maintain code cleanliness.
56-56
: Consider making the database path configurable.The database path is hardcoded to
'./dist/test.db'
. Consider making this configurable through environment variables or configuration files for better flexibility across different environments.
113-113
: Consider extracting the magic value into a constant.The value "MomRkYXRho" is repeated multiple times. Consider extracting it into a named constant to improve maintainability and clarify its purpose.
+const TEST_META_DATA = "MomRkYXRho"; // Then use TEST_META_DATA instead of the magic value in all locations
Also applies to: 137-137, 178-178, 224-224, 229-229
tests/Dockerfile.connect-netlify (1)
8-10
: Consider optimizing the copy order.The current order of operations might lead to files being overwritten. Consider reorganizing the COPY instructions to make the flow more explicit:
```diff
-COPY tests/connect-netlify/app /usr/src/app/tests/connect-netlify/app
-COPY dist/netlify/server.js /usr/src/app/tests/connect-netlify/app/netlify/edge-functions/fireproof.js
+# First copy the built server file
+COPY dist/netlify/server.js /usr/src/app/tests/connect-netlify/app/netlify/edge-functions/fireproof.js
+# Then copy the rest of the app files, excluding the edge-functions directory
+COPY tests/connect-netlify/app /usr/src/app/tests/connect-netlify/app
```

vitest.libsql.config.ts (1)
5-18
: Good test definition, consider additional coverage and concurrency aspects.
Your configuration is solid, but ensure that any concurrency changes and environment variable setups insetup.libsql.ts
do not interfere with parallel test runs. If you plan to collect coverage from these tests, you may need to add coverage configuration in this file or your global config.src/sql/v0.19/sqlite/libsql/sqlite-connection.ts (2)
23-25
: Validate concurrency approach using KeyedResolvOnce.
TheonceSQLiteConnections
offers a neat way to ensure a single connection per filename. Make sure that concurrency conditions (like multiple concurrent calls toconnect
) are handled gracefully, and that failing connections don’t remain cached in the resolver.
75-78
: Release the client after close.
Currently,_client
remains set even afterclose()
. Subsequent accesses toclient
below line 77 will throw. Consider nullifying_client
or ensuring any repeated calls toclose()
are gracefully handled.src/cloud/ws-room.ts (1)
1-5
: Document connection acceptance contract.
acceptConnection(ws, wse)
is a solid interface method. Including a short JSDoc comment describing any error handling, reconnect logic, or expected usage scenarios would improve clarity and maintainability.src/cloud/backend/env.d.ts (1)
8-52
: Maintain consistent and well-documented environment variables.
This interface declares numerous environment properties. Consider adding comments to clarify optional vs. required usage, fallback behaviors if they are missing, and which environment(s) they apply to (e.g., local Docker, CF Workers). This ensures better maintainability and reduces confusion for new contributors.src/cloud/node-hono-server.ts (1)
23-51
: Implement WebSocket connection cleanup.The WebSocket connections map lacks cleanup mechanism for closed connections.
const wsConnections = new Map<string, WebSocket>(); class NodeWSRoom implements WSRoom { readonly sthis: SuperThis; constructor(sthis: SuperThis) { this.sthis = sthis; } acceptConnection(ws: WebSocket, wse: WSEvents): Promise<void> { const id = this.sthis.nextId(12).str; wsConnections.set(id, ws); + ws.addEventListener('close', () => { + wsConnections.delete(id); + });
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml
is excluded by!**/pnpm-lock.yaml
📒 Files selected for processing (18)
.coderabbit.yaml
(1 hunks)package.json
(3 hunks)setup.libsql.ts
(1 hunks)setup.node-sqlite3-wasm.ts
(1 hunks)src/cloud/backend/cf-dobj-abstract-sql.ts
(1 hunks)src/cloud/backend/cf-hono-server.ts
(1 hunks)src/cloud/backend/env.d.ts
(1 hunks)src/cloud/backend/server.ts
(1 hunks)src/cloud/backend/wrangler.toml
(1 hunks)src/cloud/meta-merger/meta-merger.test.ts
(1 hunks)src/cloud/node-hono-server.ts
(1 hunks)src/cloud/ws-room.ts
(1 hunks)src/coerce-binary.ts
(1 hunks)src/sql/v0.19/sqlite/libsql/sqlite-connection.ts
(1 hunks)src/sql/v0.19/sqlite_factory.ts
(4 hunks)tests/Dockerfile.connect-netlify
(1 hunks)vitest.libsql.config.ts
(1 hunks)vitest.workspace.ts
(1 hunks)
✅ Files skipped from review due to trivial changes (1)
- setup.node-sqlite3-wasm.ts
🚧 Files skipped from review as they are similar to previous changes (6)
- .coderabbit.yaml
- vitest.workspace.ts
- src/cloud/backend/cf-dobj-abstract-sql.ts
- src/sql/v0.19/sqlite_factory.ts
- src/coerce-binary.ts
- package.json
🧰 Additional context used
🪛 Biome (1.9.4)
src/cloud/backend/cf-hono-server.ts
[error] 61-61: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 61-61: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 91-91: Useless case clause.
because the default clause is present:
Unsafe fix: Remove the useless case.
(lint/complexity/noUselessSwitchCase)
src/cloud/node-hono-server.ts
[error] 69-69: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 69-69: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
src/cloud/meta-merger/meta-merger.test.ts
[error] 19-19: Avoid the use of spread (...
) syntax on accumulators.
Spread syntax should be avoided on accumulators (like those in .reduce
) because it causes a time complexity of O(n^2)
.
Consider methods such as .splice or .push instead.
(lint/performance/noAccumulatingSpread)
🔇 Additional comments (11)
src/cloud/meta-merger/meta-merger.test.ts (2)
68-94
: Well-structured test setup with proper cleanup!The test setup is well-organized with:
- Proper database initialization
- Unique connection IDs for test isolation
- Consistent cleanup after each test
96-244
: Comprehensive test coverage with good edge cases!The test suite thoroughly covers:
- Empty metadata handling
- Single and multiple metadata insertions
- Metadata retrieval and deletion
- Edge cases and cleanup scenarios
tests/Dockerfile.connect-netlify (1)
10-10
: Verify source file existence and document the file rename.The copy operation assumes the existence of
dist/netlify/server.js
. Additionally, the renaming fromserver.js
tofireproof.js
needs documentation.Run this script to verify the source file and related configurations:
setup.libsql.ts (1)
1-2
: Import statement looks good.
No issues found with the import path or syntax.vitest.libsql.config.ts (1)
1-4
: Imports look correct.
No issues with importing Vitest configuration or thevite-tsconfig-paths
plugin.src/sql/v0.19/sqlite/libsql/sqlite-connection.ts (2)
9-21
: Ensure Node.js Buffer compatibility in your environment.
TheLS3Taste
class usesBuffer.from
andUint8Array.from
, which are Node.js APIs. If your runtime is something like Cloudflare Workers, these methods may not be natively available. Confirm that your target environment supports Node APIs, or consider usingTextEncoder
/TextDecoder
or another platform-agnostic approach.
34-37
: Leverage the “libsql” taste configuration.
The constructor extendsSqlite3Connection
withLS3Taste
. This looks appropriate for the “libsql” backend. Ensure that any additional parameters or initialization logic required bylibsql
is included in the constructor or subsequent lifecycle calls.src/cloud/backend/server.ts (3)
30-34
: Well-structured interface definition.The ExecSQLResult interface is clearly defined with appropriate readonly properties and types.
37-47
:⚠️ Potential issueAdd error handling to execSql method.
The SQL execution needs proper error handling and parameter validation.
```diff
 async execSql(sql: string, params: unknown[]): Promise<ExecSQLResult> {
+  if (!sql) {
+    throw new Error('SQL query is required');
+  }
+  if (!Array.isArray(params)) {
+    throw new Error('Parameters must be an array');
+  }
+  try {
     const cursor = await this.ctx.storage.sql.exec(sql, ...params);
     const rawResults = cursor.toArray();
     const res = {
       rowsRead: cursor.rowsRead,
       rowsWritten: cursor.rowsWritten,
       rawResults,
     };
     return res;
+  } catch (error) {
+    throw new Error(`SQL execution failed: ${error instanceof Error ? error.message : 'Unknown error'}`);
+  }
 }
```

Likely invalid or redundant comment.
1-4
:⚠️ Potential issueReplace Cloudflare Workers dependency with local implementation.
The import from "cloudflare:workers" needs to be replaced with a local implementation to align with the PR's objective of transitioning from Cloudflare to a local Docker setup.
✅ Verification successful
Remove Cloudflare DurableObject import and use Node.js implementation
The codebase already has a complete Node.js implementation using better-sqlite3 that implements the same SQLDatabase interface. Replace the Cloudflare DurableObject import with the BetterSQLDatabase implementation from
bettersql-abstract-sql.ts
.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for alternative implementations that could replace Cloudflare Workers
fd -t f ".*server.*\.ts$" src/cloud/
```

Length of output: 157
Script:
```bash
#!/bin/bash
# Check the implementation differences and DurableObject usage
echo "=== node-hono-server.ts ==="
cat src/cloud/node-hono-server.ts

echo -e "\n=== cf-hono-server.ts ==="
cat src/cloud/backend/cf-hono-server.ts

echo -e "\n=== Searching for DurableObject usage ==="
rg "DurableObject" -A 3
```

Length of output: 21652
Script:
#!/bin/bash # Check for any remaining Cloudflare dependencies in Node.js implementation rg -l "cloudflare" src/cloud/node-hono-server.ts # Check if all required features are implemented in Node.js version echo "=== Checking SQL implementation ===" cat src/cloud/backend/meta-merger/abstract-sql.js || cat src/cloud/meta-merger/abstract-sql.jsLength of output: 373
Script:
#!/bin/bash # Find SQL implementation files echo "=== Finding SQL implementation files ===" fd -t f ".*sql.*\.(ts|js)$" src/cloud/ # Check imports in found files to understand the implementation structure echo -e "\n=== Checking SQLDatabase usage ===" rg "SQLDatabase" -A 3 src/cloud/Length of output: 12432
src/cloud/backend/cf-hono-server.ts (1)

Lines 18-25: ⚠️ Potential issue: Improve type safety in getBackendDurableObject.

The function uses unsafe type casting and lacks proper error handling.

```diff
 export function getBackendDurableObject(env: Env) {
-  const cfBackendKey = env.CF_BACKEND_KEY ?? "FP_BACKEND_DO";
-  const rany = env as unknown as Record<string, DurableObjectNamespace<FPBackendDurableObject>>;
-  const dObjNs = rany[cfBackendKey];
-  const id = dObjNs.idFromName(env.FP_BACKEND_DO_ID ?? cfBackendKey);
-  return dObjNs.get(id);
+  const cfBackendKey = env.CF_BACKEND_KEY ?? "FP_BACKEND_DO";
+  if (!env[cfBackendKey]) {
+    throw new Error(`Missing required durable object: ${cfBackendKey}`);
+  }
+  const dObjNs = env[cfBackendKey] as DurableObjectNamespace<FPBackendDurableObject>;
+  if (!dObjNs || typeof dObjNs.idFromName !== 'function') {
+    throw new Error(`Invalid durable object namespace: ${cfBackendKey}`);
+  }
+  const id = dObjNs.idFromName(env.FP_BACKEND_DO_ID ?? cfBackendKey);
+  return dObjNs.get(id);
 }
```

Likely invalid or redundant comment.
```ts
registerSqliteStoreProtocol();
process.env.FP_STORAGE_URL = "sqlite://dist/fp-dir-libsql?taste=libsql";
process.env.FP_KEYBAG_URL = "file://./dist/kb-dir-libsql";
```
🛠️ Refactor suggestion
Set the environment variables before calling the protocol registration.

Currently, `registerSqliteStoreProtocol()` is invoked before `process.env.FP_STORAGE_URL` and `process.env.FP_KEYBAG_URL` are set. If `registerSqliteStoreProtocol()` depends on these environment variables, the setup may be incomplete or misconfigured at the time of registration. Consider reordering these lines or passing the variables directly to the function.
Apply this diff to set the environment variables first:

```diff
 import { registerSqliteStoreProtocol } from "./src/sql/gateway-sql.js";
-registerSqliteStoreProtocol();
 process.env.FP_STORAGE_URL = "sqlite://dist/fp-dir-libsql?taste=libsql";
 process.env.FP_KEYBAG_URL = "file://./dist/kb-dir-libsql";
+registerSqliteStoreProtocol();
```
```ts
async connect(): Promise<void> {
  let fName = this.url.pathname;
  if (!fName) {
    throw this.logger.Error().Url(this.url).Msg("filename is empty").AsError();
  }
  // const version = this.url.searchParams.get("version");
  // if (!version) {
  //   throw this.logger.Error().Str("url", this.url.toString()).Msg("version not found").AsError();
  // }
  const hasName = this.url.getParam("name");
  if (hasName) {
    fName = this.sthis.pathOps.join(fName, hasName);
    if (!fName.endsWith(".sqlite")) {
      fName += ".sqlite";
    }
  }
  this.logger.Debug().Str("filename", fName).Msg("to-connect");
  this._client = await onceSQLiteConnections.get(fName).once(async () => {
    this.logger.Debug().Str("filename", fName).Msg("connect");
    const Sqlite3Database = await onceImport.once(async () => {
      const sql = await import("better-sqlite3");
      return sql.default;
    });
    if (hasName) {
      await (await this.fs()).mkdir(this.sthis.pathOps.dirname(fName), { recursive: true });
    }
    const db = new Sqlite3Database(fName, {
      // verbose: console.log,
      nativeBinding: "./node_modules/better-sqlite3/build/Release/better_sqlite3.node",
    });
    // this.logger.Debug().Any("client", this.client).Msg("connected")
    if (!db) {
      throw this.logger.Error().Msg("connect failed").AsError();
    }
    return db;
  });
}
```
🛠️ Refactor suggestion
Handle potential filesystem and native binding path errors.
- If directory creation fails (line 62), the thrown error could be more explicit by wrapping the original filesystem error.
- The hardcoded `nativeBinding` path on line 66 might break if the module folder structure changes. You could parameterize it or catch loading errors for a more robust solution.
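The two points could be addressed together along these lines. This is a sketch only: the env-var name `FP_SQLITE_NATIVE_BINDING` and both helper names are hypothetical illustrations, not part of the reviewed code.

```typescript
// Sketch only — FP_SQLITE_NATIVE_BINDING and both helpers are hypothetical,
// not part of the reviewed code.
import * as fs from "node:fs/promises";
import * as path from "node:path";

// Resolve the native binding from an (assumed) env override instead of a
// hardcoded path, so a relocated node_modules layout only needs a config change.
function resolveNativeBinding(): string {
  return (
    process.env.FP_SQLITE_NATIVE_BINDING ??
    "./node_modules/better-sqlite3/build/Release/better_sqlite3.node"
  );
}

// Wrap the original filesystem error so the caller sees which directory failed.
async function ensureDir(fName: string): Promise<void> {
  const dir = path.dirname(fName);
  try {
    await fs.mkdir(dir, { recursive: true });
  } catch (err) {
    throw new Error(
      `mkdir failed for ${dir}: ${err instanceof Error ? err.message : String(err)}`
    );
  }
}
```

The `connect()` method could then call `ensureDir(fName)` and pass `nativeBinding: resolveNativeBinding()` to the constructor.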
```ts
export default {
  fetch: async (req, env, ctx): Promise<Response> => {
    await honoServer.register(app);
    return app.fetch(req, env, ctx);
  },
} satisfies ExportedHandler<Env>;

/*
  async fetch(req, env, _ctx): Promise<Response> {
    const id = env.FP_META_GROUPS.idFromName("fireproof");
    const stub = env.FP_META_GROUPS.get(id);
    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    return stub.fetch(req as any) as unknown as Promise<Response>;
  },
} satisfies ExportedHandler<Env>;
*/
```
🛠️ Refactor suggestion
Remove or implement commented code.
The commented-out code block (lines 20-28) appears to be from the old Cloudflare implementation. Either implement it for the Docker setup or remove it to maintain code clarity.
```ts
export class FPRoomDurableObject extends DurableObject<Env> {
  private wsEvents?: WSEvents;

  async acceptWebSocket(ws: WebSocket, wsEvents: WSEvents): Promise<void> {
    this.ctx.acceptWebSocket(ws);
    this.wsEvents = wsEvents;
  }

  webSocketError(ws: WebSocket, error: unknown): void | Promise<void> {
    const wsCtx = new WSContext(ws as WSContextInit);
    this.wsEvents?.onError?.(error as Event, wsCtx);
  }

  async webSocketMessage(ws: WebSocket, msg: string | ArrayBuffer): Promise<void> {
    const wsCtx = new WSContext(ws as WSContextInit);
    this.wsEvents?.onMessage?.({ data: msg } as MessageEvent, wsCtx);
  }

  webSocketClose(ws: WebSocket, code: number, reason: string): void | Promise<void> {
    const wsCtx = new WSContext(ws as WSContextInit);
    this.wsEvents?.onClose?.({ code, reason } as CloseEvent, wsCtx);
  }
}
```
🛠️ Refactor suggestion
Enhance WebSocket error handling and event management.
The WebSocket implementation needs improvements:
- Add proper error handling in `webSocketMessage`
- Add connection cleanup in `webSocketClose`
- Add logging for better debugging
```diff
 async webSocketMessage(ws: WebSocket, msg: string | ArrayBuffer): Promise<void> {
   const wsCtx = new WSContext(ws as WSContextInit);
-  this.wsEvents?.onMessage?.({ data: msg } as MessageEvent, wsCtx);
+  try {
+    await this.wsEvents?.onMessage?.({ data: msg } as MessageEvent, wsCtx);
+  } catch (error) {
+    this.wsEvents?.onError?.(error, wsCtx);
+  }
 }

 webSocketClose(ws: WebSocket, code: number, reason: string): void | Promise<void> {
   const wsCtx = new WSContext(ws as WSContextInit);
-  this.wsEvents?.onClose?.({ code, reason } as CloseEvent, wsCtx);
+  try {
+    this.wsEvents?.onClose?.({ code, reason } as CloseEvent, wsCtx);
+  } catch (e) {
+    // Log error but don't rethrow as the connection is already closing
+    console.error('Error during WebSocket close:', e);
+  }
+  this.wsEvents = undefined; // Cleanup
 }
```
```toml
STORAGE_URL = "http://localhost:9000/testbucket"
ACCESS_KEY_ID = "minioadmin"
SECRET_ACCESS_KEY = "minioadmin"
```
Security: Avoid hardcoded credentials in configuration.
The configuration contains hardcoded credentials (minioadmin) across multiple environments. This is a security risk, even for testing.
Use environment variables instead:
```diff
-ACCESS_KEY_ID = "minioadmin"
-SECRET_ACCESS_KEY = "minioadmin"
+ACCESS_KEY_ID = "${MINIO_ACCESS_KEY}"
+SECRET_ACCESS_KEY = "${MINIO_SECRET_KEY}"
```
Also applies to: 67-69, 88-90, 106-108, 128-130
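The same principle can be enforced in application code with a small startup check, so a missing credential fails fast instead of silently falling back. A sketch; the helper and the `MINIO_*` variable names are assumptions for illustration:

```typescript
// Sketch, not project code: helper name and MINIO_* variable names are assumed.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Fail fast at startup instead of shipping hardcoded credentials:
// const accessKeyId = requireEnv("MINIO_ACCESS_KEY");
// const secretAccessKey = requireEnv("MINIO_SECRET_KEY");
```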
```toml
[[env.test.d1_databases]]
binding = "FP_BACKEND_D1"
database_name = "test-meta-merge"
database_id = "b8b452ed-b1d9-478f-b56c-c5e4545f157a"
```
Use unique database IDs for different environments.
The same database ID is used across different environments, which could lead to data conflicts.
Each environment should have its own unique database ID:
```diff
-database_id = "b8b452ed-b1d9-478f-b56c-c5e4545f157a"
+database_id = "${ENV_SPECIFIC_DB_ID}"
```
Also applies to: 75-78, 120-123
```ts
// getDurableObject(conn: Connection) {
//   const id = env.FP_META_GROUPS.idFromName("fireproof");
//   const stub = env.FP_META_GROUPS.get(id);
// }
```
🛠️ Refactor suggestion
Remove commented-out code.
Remove the commented-out code blocks that are no longer needed. If they contain important logic that needs to be preserved, convert them into TODO comments with clear explanations.
Also applies to: 160-165
This should move our tests from Cloudflare to local Docker.

Two Docker containers are needed:

```bash
bash test/start-cloud.sh
bash test/start-s3.sh
```

On macOS you need something like colima.

The tests

```bash
pnpm run test --project cloud
```

do not yet detect that nothing is written to the storage backend.
Summary by CodeRabbit

Release Notes

New Features

- `MetaByTenantLedgerRow` interface and `MetaByTenantLedgerSql` class for managing tenant ledger data.
- `TenantRow` interface and `TenantSql` class for managing tenant data in a SQLite database.
- `MsgRawConnectionBase` class to streamline error handling in connections.
- `buildMsgDispatcher` function for structured message dispatching.
- `CFWorkerSQLStatement` and `CFWorkerSQLDatabase` classes for managing SQL operations in Cloudflare Workers.
- `WSRoom` interface for WebSocket connection management.
- `V0_19LS3Connection` class for SQLite connections using the `libsql` client.

Improvements

- `BetterSQLStatement` and `BetterSQLDatabase` classes.
- `UCANGateway` class to streamline proof retrieval and enhance subscription capabilities.

Infrastructure

- `create-schema-cli.ts` for schema management in metadata operations.