Implementation Divergences #12
Open · DavidBuchanan314 opened this issue Nov 15, 2024 · 0 comments

DavidBuchanan314 commented Nov 15, 2024

The concrete behaviour of this implementation may diverge from that of the reference impl (which lives at https://github.com/bluesky-social/atproto), sometimes on purpose, sometimes accidentally, sometimes in violation of the spec, and sometimes in stricter conformance to it. I'll try to write the divergences down here. (NB: I've avoided looking at the reference impl too closely, so some of these are currently "I think I diverge" rather than "I know I diverge".)

  • applyWrites supports swapRecord (see bluesky-social/atproto#2003, "Should `applyWrites` offer `swapRecord` options for update and deletion ops?")

  • uploadBlob doesn't care what the mime type is and will reflect back whatever you claim it is in the response. You can reference the same blob with multiple mime types. getBlob will unconditionally return application/octet-stream at the HTTP level (see bluesky-social/atproto#1213, "Blobs should not have an associated mime-type"). Also, getBlob returns a Content-Disposition header specifying a download file name in the format <cid>.bin. (There's a sketch of this response shape after this list.)

  • Lexicons are never validated. (I would like to change this at some point! It's just not a priority.)

  • There is no blob size limit and no record size limit. I should implement a record size limit, though: it's a DoS vector, since a record and its parse tree must fit in RAM. My uploadBlob/getBlob are streamed, so blobs should be fine. (I could avoid record size limits entirely with 100% streamed parsing/serialisation, a la https://github.com/DavidBuchanan314/dag-sqlite.)

  • Potentially-higher maximum record object nesting depth. I take care to process records non-recursively; however, at present I'm using the built-in JSON parser/serialiser, which inherits the usual recursion-depth limitations at the HTTP API level (demonstrated in a sketch after this list). TODO: write my own non-recursive JSON library!!! Why am I doing this? Just for bragging rights, really: I want to be able to process technically-valid records that the reference impl cannot.

  • To work around the aforementioned JSON API limitations, the repo write APIs also accept record values encoded as base64-encoded DAG-CBOR (see the encoding sketch after this list).

  • The service can (optionally) run on a UNIX-domain socket instead of TCP, which in theory is more secure/efficient if you're planning on hosting it behind a reverse proxy on the same box, like nginx (and this is the officially-documented way of deploying the service). A minimal sketch follows after this list.

  • We use p256 keys by default, not k256. (I'd like to add support for k256 too, although it probably won't be the default. Done: both key types are now supported; see the key-generation sketch after this list.)

  • We don't implement any APIs marked as "deprecated"

  • There are no "app passwords", and OAuth is TODO

  • There is no "read after write" logic. I think that's a bit of a kludge and I'm waiting for an improved solution, whatever that entails. In the meantime, things feel a little janky because you don't get to see your writes straight away.
