The concrete behaviour of this implementation may diverge from that of the reference impl (which lives at https://github.com/bluesky-social/atproto), sometimes on purpose, sometimes accidentally, sometimes in violation of the spec, and sometimes in stricter conformance to it. I'll try to write them all down here. (NB: I've avoided looking at the reference impl too closely, so some of these are currently "I think I diverge" rather than "I know I diverge".)

applyWrites supports swapRecord (see "Should `applyWrites` offer `swapRecord` options for update and deletion ops?", bluesky-social/atproto#2003).
uploadBlob doesn't care what the mime type is, and will reflect back whatever you say it is in the response. You can reference the same blob with multiple mime types. getBlob will unconditionally return application/octet-stream at the HTTP level (blobs should not have an associated mime-type: bluesky-social/atproto#1213). Also, getBlob returns a Content-Disposition header specifying a download file name in the format <cid>.bin.
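For illustration, a hedged sketch of what this looks like from a client's perspective, using the standard XRPC endpoint names; the host, DID, auth token and blob bytes below are placeholders, and the response shape assumed here is the standard lexicon one:

```python
import requests

PDS = "https://pds.example.com"          # placeholder host
AUTH = {"Authorization": "Bearer <jwt>"}  # placeholder access token

# uploadBlob: the claimed mime type is echoed back, not sniffed or checked
resp = requests.post(
    f"{PDS}/xrpc/com.atproto.repo.uploadBlob",
    headers={**AUTH, "Content-Type": "image/png"},
    data=b"not actually a png",
)
blob = resp.json()["blob"]
cid = blob["ref"]["$link"]
assert blob["mimeType"] == "image/png"  # reflected back verbatim

# getBlob: always served as application/octet-stream, named <cid>.bin
resp = requests.get(
    f"{PDS}/xrpc/com.atproto.sync.getBlob",
    params={"did": "did:plc:example", "cid": cid},
)
assert resp.headers["Content-Type"] == "application/octet-stream"
assert f"{cid}.bin" in resp.headers["Content-Disposition"]
```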
Lexicons are never validated. (I would like to change this at some point! It's just not a priority.)
There is no blob size limit nor record size limit. (I should implement a record size limit, though - it's a DoS vector, since a record and its parse tree must fit in RAM. uploadBlob/getBlob are streamed, so blobs are fine. Record size limits could be avoided entirely with 100% streamed parsing/serialisation, a la https://github.com/DavidBuchanan314/dag-sqlite.)
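For illustration, a minimal sketch of the kind of guard a record size limit implies (the limit value and function name here are made up, not taken from this codebase):

```python
import json

MAX_RECORD_BYTES = 1_000_000  # illustrative limit only

def parse_record_json(raw: bytes) -> dict:
    # Reject oversized payloads *before* parsing, since the parsed tree
    # (not just the raw bytes) has to fit in RAM.
    if len(raw) > MAX_RECORD_BYTES:
        raise ValueError(f"record too large: {len(raw)} > {MAX_RECORD_BYTES} bytes")
    return json.loads(raw)
```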
Potentially higher maximum record object nesting depth. I take care to process records non-recursively; however, at present I'm using the built-in JSON parser/serialiser, which inherits the usual limitations at the HTTP API level. TODO: write my own non-recursive JSON library!!! Why am I doing this? Just for bragging rights, really - I want to be able to process technically-valid records that the reference impl cannot.
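To make that inherited limitation concrete, here's a small demonstration (assuming CPython) of the built-in json module refusing a deeply nested but otherwise valid document:

```python
import json
import sys

# Nest well past the interpreter's recursion limit.
depth = sys.getrecursionlimit() * 2
deeply_nested = "[" * depth + "]" * depth

try:
    json.loads(deeply_nested)
except RecursionError:
    print(f"json.loads refused a document nested {depth} levels deep")
```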
To work around the aforementioned JSON API limitations, the repo write APIs also support record values encoded as base64-encoded DAG-CBOR.
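A hedged sketch of preparing such a value, assuming the third-party dag-cbor package; the name of the request field that actually carries the base64 payload is specific to this implementation and isn't shown here:

```python
import base64
import dag_cbor  # pip install dag-cbor

record = {
    "$type": "app.bsky.feed.post",
    "text": "hello",
    "createdAt": "2024-01-01T00:00:00Z",
}

# DAG-CBOR bytes, then base64, sidestepping JSON nesting-depth limits entirely.
encoded = base64.b64encode(dag_cbor.encode(record)).decode("ascii")
print(encoded)
```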
The service can (optionally) listen on a UNIX-domain socket instead of TCP, which in theory is more secure/efficient if you're planning on hosting it behind a reverse proxy like nginx on the same box (and this is the officially-documented way of deploying the service).
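A minimal sketch of talking to it over the socket with nothing but the standard library (the socket path below is a placeholder); a same-box reverse proxy such as nginx would forward HTTP to the socket in much the same way:

```python
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/path/to/pds.sock")  # placeholder socket path
    # Plain HTTP/1.1 over the UNIX-domain socket; describeServer needs no auth.
    s.sendall(
        b"GET /xrpc/com.atproto.server.describeServer HTTP/1.1\r\n"
        b"Host: localhost\r\nConnection: close\r\n\r\n"
    )
    # Print the first chunk of the response (enough for a quick check).
    print(s.recv(65536).decode(errors="replace"))
```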
We use p256 keys by default, not k256 (done - both key types are supported, though k256 is not the default).
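For reference, the two curves in question, generated here with the third-party cryptography package (illustrative only - not necessarily what this codebase uses internally):

```python
from cryptography.hazmat.primitives.asymmetric import ec

p256_key = ec.generate_private_key(ec.SECP256R1())  # "p256", the default here
k256_key = ec.generate_private_key(ec.SECP256K1())  # "k256", also supported
```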
We don't implement any APIs marked as "deprecated"
There are no "app passwords", and OAuth is TODO.
There is no "read after write" logic. I think that's a bit of a kludge and I'm waiting for an improved solution, whatever that entails. In the meantime, things feel a little janky because you don't get to see your writes straight away.