Joystream storage node.
- Colossus
  - Description
  - Installation
    - Ubuntu Linux
      - Install packages required for installation
      - Clone the code repository
      - Install volta
      - Install project dependencies and build it
      - Verify installation
  - Usage
  - CLI Commands
The main responsibility of Colossus is handling media data for users. The data could be images, audio, or video files. Colossus receives uploads and saves files in a local folder, registers uploads in the blockchain, and later serves files to Argus nodes (distribution nodes). Colossus instances spread the data using peer-to-peer synchronization. Data management is blockchain-based: it relies on the concepts of buckets, bags, and data objects. The full description of the blockchain smart contracts can be found here.
Colossus provides a REST API for its clients and other Colossus instances. It's based on the OpenAPI Specification v3. Here is the complete spec (work in progress).
API endpoints:
- files
  - get - get the data file by its ID
  - head - get the data file headers by its ID
  - post - upload a file
- state
  - version - Colossus version and system environment
  - all data object IDs
  - data object IDs for a bag
  - data statistics - total data folder size and data object count
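For illustration, here is how the endpoints can be exercised with curl, assuming an instance listening on localhost port 3333 with an `/api/v1` path prefix (both are assumptions — check your deployment and the spec above):

```bash
# Assumed base URL and API prefix; adjust to your deployment.
BASE=http://localhost:3333/api/v1

# Colossus version and system environment
curl "$BASE/state/version"

# Download a data object by its ID (ID 123 is a placeholder)
curl -O "$BASE/files/123"

# Fetch only the headers of a data object
curl -I "$BASE/files/123"
```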
There is a command-line interface to manage Storage Working Group operations such as creating a bucket or changing storage settings. A full description can be found below.
There are several groups of commands:
- leader - manages the Storage Working Group in the blockchain. Requires leader privileges.
- operator - Storage Provider commands: manages data object uploads and storage provider metadata (the endpoint being the most important). Requires active Storage Working Group membership.
- dev - development support commands. Requires a development blockchain setup with the Alice account.
- ungrouped - the `server` and `help` commands: `server` starts Colossus and `help` shows the full command list.
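For example, once the CLI is built (see the installation section below), grouped commands are invoked as `<group>:<command>`:

```bash
# Show the full command list
yarn storage-node help

# Show help for a specific grouped command
yarn storage-node leader:create-bucket --help
```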
The storage provider should supply metadata for the Colossus instance to be discoverable by other Colossus or Argus (distributor node) instances. At the very least, an endpoint should be registered in the blockchain. For some complex scenarios, Colossus should also provide its geolocation.
Metadata can be registered using the `operator:set-metadata` command. A simple endpoint can be set using the `--endpoint` flag of the command. Complex metadata requires a JSON file (example). The JSON file format is based on the protobuf format described here.
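A minimal sketch of such a JSON file and its registration, assuming `endpoint` and `location` fields as in the linked protobuf format (field names and values here are illustrative — consult the linked example and schema):

```bash
# Illustrative metadata file; verify field names against the linked schema.
cat > metadata.json <<'EOF'
{
  "endpoint": "https://colossus.example.com/",
  "location": {
    "countryCode": "DE",
    "city": "Berlin",
    "coordinates": { "latitude": 52.52, "longitude": 13.405 }
  }
}
EOF

# Register it for bucket 1 as worker 0 (IDs are placeholders; add your
# account flags, e.g. --accountUri or --keyFile/--password, as needed)
yarn storage-node operator:set-metadata -i 1 -w 0 -j ./metadata.json
```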
Colossus accepts files using its API. The data must be uploaded using the POST HTTP method with `multipart/form-data` (an illustrative request is sketched after the process outline below).
Simplified process (file uploading):
- accepting the data upload in the temp folder
- data hash & size verification
- moving the data to the data folder
- registering the data object as `accepted` in the blockchain
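A hedged sketch of such an upload with curl, assuming the `/api/v1/files` path and the form field names below (all assumptions — the OpenAPI spec linked above is authoritative; IDs are placeholders):

```bash
# Illustrative multipart upload; field names may differ between versions.
curl -X POST "http://localhost:3333/api/v1/files" \
  -F "file=@./video.mp4" \
  -F "dataObjectId=1" \
  -F "storageBucketId=1" \
  -F "bagId=dynamic:channel:4"
```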
Several Colossus instances should store a replica of the data in order to provide some degree of reliability. When a Colossus instance receives a data upload, it marks the related data object as `accepted`. Other instances that have the same obligation to store the data (they serve storage buckets assigned to the same bag) will eventually fetch this data object from the initial receiver (or from another node that has already downloaded the new data object from the initial receiver) using the REST API.
The actual data distribution (serving to end users) is done via Argus, the distributor node. It fetches data from Colossus using the same `get` endpoint, one data object at a time.
- Colossus relies on the Query Node (Hydra) to get the blockchain data in a structured form.
- Using Colossus as a functioning Storage Provider requires providing an account URI (or a key file and password) of the transactor account associated with the assigned storage bucket, as well as an active `WorkerId` from the Storage Working Group.
# Ubuntu Linux
# Install packages required for installation
```bash
apt update
apt install git curl
```
# Clone the code repository
```bash
git clone https://github.com/Joystream/joystream
cd joystream
```
# Install volta
```bash
curl https://get.volta.sh | bash
# start a new shell so volta is available on PATH
bash
```
# Install project dependencies and build it
```bash
yarn
yarn workspace @joystream/types build
yarn workspace @joystream/metadata-protobuf build
yarn workspace storage-node build
```
# Verify installation
```bash
cd storage-node
yarn storage-node version
```
# Usage

```bash
$ yarn storage-node server --apiUrl ws://localhost:9944 -w 0 --accountUri //Alice -q localhost:4352 -o 3333 -d ~/uploads --sync
```
Running the server requires:
- an account URI, or a key file and password, for the transactor account
- the workerId from the Storage Working Group that matches the transactor account above
- the Joystream node websocket endpoint URL
- the Query Node URL
- (optional) the Elasticsearch URL
- a pre-created directory for data uploading
A full command description can be found below.
There is also an option to run Colossus as a Docker container.
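A minimal sketch of such a run, assuming a `joystream/storage-node` image and mirroring the flags from the example above (image name, tag, mount points and endpoint hostnames are assumptions — consult the project's Docker documentation):

```bash
# Sketch only: adjust the node and query-node endpoints so they are
# reachable from inside the container.
docker run -d --name colossus \
  -v ~/uploads:/data \
  -p 3333:3333 \
  joystream/storage-node server \
  --apiUrl ws://joystream-node:9944 -w 0 --accountUri //Alice \
  -q http://storage-squid:4352/graphql -o 3333 -d /data --sync
```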
# CLI Commands

storage-node archive
storage-node help [COMMAND]
storage-node leader:cancel-invite
storage-node leader:create-bucket
storage-node leader:delete-bucket
storage-node leader:invite-operator
storage-node leader:remove-operator
storage-node leader:set-bucket-limits
storage-node leader:set-global-uploading-status
storage-node leader:update-bag-limit
storage-node leader:update-bags
storage-node leader:update-blacklist
storage-node leader:update-bucket-status
storage-node leader:update-data-fee
storage-node leader:update-data-object-bloat-bond
storage-node leader:update-dynamic-bag-policy
storage-node leader:update-voucher-limits
storage-node operator:accept-invitation
storage-node operator:set-metadata
storage-node server
storage-node util:cleanup
storage-node util:fetch-bucket
storage-node util:multihash
storage-node util:verify-bag-id
Starts running in a write-only, archive mode (no external API exposed). Downloads, compresses and uploads all assigned data objects to a specified S3 bucket.
USAGE
$ storage-node archive
OPTIONS
-b, --buckets=buckets
[default: 1] Comma separated list of bucket IDs to sync. Buckets that are not assigned to
the worker are ignored. If not specified, all buckets will be synced.
-e, --elasticSearchEndpoint=elasticSearchEndpoint
Elasticsearch endpoint (e.g.: http://some.com:8081).
Log level could be set using the ELASTIC_LOG_LEVEL environment variable.
Supported values: warn, error, debug, info. Default: debug
-h, --help
show CLI help
-i, --syncInterval=syncInterval
[default: 20] Interval between synchronizations (in minutes)
-k, --keyFile=keyFile
Path to key file to add to the keyring.
-l, --logFilePath=logFilePath
Absolute path to the rolling log files.
-m, --dev
Use development mode
-n, --logMaxFileNumber=logMaxFileNumber
[default: 7] Maximum rolling log files number.
-p, --password=password
Password to unlock keyfiles. Multiple passwords can be passed, to try against all files. If
not specified a single password can be set in ACCOUNT_PWD environment variable.
-q, --storageSquidEndpoint=storageSquidEndpoint
(required) [default: http://localhost:4352/graphql] Storage Squid graphql server endpoint
(e.g.: http://some.com:4352/graphql)
-r, --syncWorkersNumber=syncWorkersNumber
[default: 8] Sync workers number (max async operations in progress).
-t, --syncWorkersTimeout=syncWorkersTimeout
[default: 30] Asset downloading timeout for the synchronization (in minutes).
-u, --apiUrl=apiUrl
[default: ws://localhost:9944] Runtime API URL. Mandatory in non-dev environment.
-w, --worker=worker
(required) Storage provider worker ID
-x, --logMaxFileSize=logMaxFileSize
[default: 50000000] Maximum rolling log files size in bytes.
-y, --accountUri=accountUri
Account URI (optional). If not specified a single key can be set in ACCOUNT_URI environment
variable.
-z, --logFileChangeFrequency=(yearly|monthly|daily|hourly|none)
[default: daily] Log files update frequency.
--archiveFileSizeLimitMB=archiveFileSizeLimitMB
[default: 1000] Try to avoid creating archive files larger than this size limit (in MB)
unless necessary
--archiveTrackfileBackupFreqMinutes=archiveTrackfileBackupFreqMinutes
[default: 60] Determines how frequently the archive tracking file (containing information
about .7z files content) should be uploaded to S3 in case a change is detected.
--awsS3BucketName=awsS3BucketName
(required) Name of the AWS S3 bucket where the files will be stored.
--awsS3BucketRegion=awsS3BucketRegion
(required) AWS region of the AWS S3 bucket where the files will be stored.
--elasticSearchIndexPrefix=elasticSearchIndexPrefix
[default: logs-colossus] Elasticsearch index prefix. Node ID will be appended to the prefix.
Default: logs-colossus. Can be passed through ELASTIC_INDEX_PREFIX environment variable.
--elasticSearchPassword=elasticSearchPassword
Elasticsearch password for basic authentication. Can be passed through ELASTIC_PASSWORD
environment variable.
--elasticSearchUser=elasticSearchUser
Elasticsearch user for basic authentication. Can be passed through ELASTIC_USER environment
variable.
--keyStore=keyStore
Path to a folder with multiple key files to load into keystore.
--localAgeTriggerThresholdMinutes=localAgeTriggerThresholdMinutes
[default: 1440] Compress and upload local data objects to S3 if the oldest of them was
downloaded more than X minutes ago
--localCountTriggerThreshold=localCountTriggerThreshold
Compress and upload local data objects to S3 if the number of them reaches this threshold.
--localSizeTriggerThresholdMB=localSizeTriggerThresholdMB
[default: 10000] Compress and upload local data objects to S3 if the combined size of them
reaches this threshold (in MB)
--tmpDownloadDir=tmpDownloadDir
(required) Directory to store temporary files during sync (absolute path).
--uploadQueueDir=uploadQueueDir
(required) Directory to store fully downloaded data objects before compressing them and
uploading to S3 (absolute path).
--uploadQueueDirSizeLimitMB=uploadQueueDirSizeLimitMB
(required) [default: 20000] Limits the total size of files stored in the upload queue directory
(in MB). Download of new objects will be limited in order to prevent exceeding this
limit. To leave a safe margin of error, it should be set to ~50% of available disk space.
--uploadRetryInterval=uploadRetryInterval
[default: 3] Interval before retrying failed upload (in minutes)
--uploadWorkersNumber=uploadWorkersNumber
[default: 4] Upload workers number (max async operations in progress).
See code: src/commands/archive.ts
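For instance, a hedged sketch of an archive run (worker/bucket IDs, directories and the S3 bucket name are placeholders; AWS credentials are assumed to be supplied via the standard AWS environment variables or configuration):

```bash
yarn storage-node archive -w 0 -b 1 \
  --awsS3BucketName my-colossus-archive \
  --awsS3BucketRegion eu-central-1 \
  --tmpDownloadDir /data/tmp \
  --uploadQueueDir /data/upload-queue
```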
display help for storage-node
USAGE
$ storage-node help [COMMAND]
ARGUMENTS
COMMAND command to show help for
OPTIONS
--all see all commands in CLI
See code: @oclif/plugin-help
Cancel a storage bucket operator invite. Requires storage working group leader permissions.
USAGE
$ storage-node leader:cancel-invite
OPTIONS
-h, --help show CLI help
-i, --bucketId=bucketId (required) Storage bucket ID
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/cancel-invite.ts
Create a new storage bucket. Requires storage working group leader permissions.
USAGE
$ storage-node leader:create-bucket
OPTIONS
-a, --allow Accepts new bags
-h, --help show CLI help
-i, --invited=invited Invited storage operator ID (storage WG worker ID)
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-n, --number=number Storage bucket max total objects number
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-s, --size=size Storage bucket max total objects size
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/create-bucket.ts
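For example, a sketch that creates a bucket accepting new bags and invites worker 0 as its operator (the worker ID and limits are placeholders):

```bash
# -n: max total objects number, -s: max total objects size
yarn storage-node leader:create-bucket -a -i 0 -n 1000 -s 100000000000
```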
Delete a storage bucket. Requires storage working group leader permissions.
USAGE
$ storage-node leader:delete-bucket
OPTIONS
-h, --help show CLI help
-i, --bucketId=bucketId (required) Storage bucket ID
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/delete-bucket.ts
Invite a storage bucket operator. Requires storage working group leader permissions.
USAGE
$ storage-node leader:invite-operator
OPTIONS
-h, --help show CLI help
-i, --bucketId=bucketId (required) Storage bucket ID
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-w, --operatorId=operatorId (required) Storage bucket operator ID (storage group worker ID)
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/invite-operator.ts
Remove a storage bucket operator. Requires storage working group leader permissions.
USAGE
$ storage-node leader:remove-operator
OPTIONS
-h, --help show CLI help
-i, --bucketId=bucketId (required) Storage bucket ID
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/remove-operator.ts
Set VoucherObjectsSizeLimit and VoucherObjectsNumberLimit for the storage bucket.
USAGE
$ storage-node leader:set-bucket-limits
OPTIONS
-h, --help show CLI help
-i, --bucketId=bucketId (required) Storage bucket ID
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-o, --objects=objects (required) New 'voucher object number limit' value
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-s, --size=size (required) New 'voucher object size limit' value
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/set-bucket-limits.ts
Set global uploading block. Requires storage working group leader permissions.
USAGE
$ storage-node leader:set-global-uploading-status
OPTIONS
-h, --help show CLI help
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-s, --set=(on|off) (required) Sets global uploading block (on/off).
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/set-global-uploading-status.ts
Update StorageBucketsPerBagLimit variable in the Joystream node storage.
USAGE
$ storage-node leader:update-bag-limit
OPTIONS
-h, --help show CLI help
-k, --keyFile=keyFile Path to key file to add to the keyring.
-l, --limit=limit (required) New StorageBucketsPerBagLimit value
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/update-bag-limit.ts
Add/remove storage buckets to/from bags. If multiple bags are provided, the same bucket IDs will be added/removed for all of them.
USAGE
$ storage-node leader:update-bags
OPTIONS
-a, --add=add
[default: ] Comma separated list of bucket IDs to add to all bag/s
-h, --help
show CLI help
-i, --bagIds=bagIds
(required) Bag ID. Format: {bag_type}:{sub_type}:{id}.
- Bag types: 'static', 'dynamic'
- Sub types: 'static:council', 'static:wg', 'dynamic:member', 'dynamic:channel'
- Id:
- absent for 'static:council'
- working group name for 'static:wg'
- integer for 'dynamic:member' and 'dynamic:channel'
Examples:
- static:council
- static:wg:storage
- dynamic:member:4
-k, --keyFile=keyFile
Path to key file to add to the keyring.
-m, --dev
Use development mode
-p, --password=password
Password to unlock keyfiles. Multiple passwords can be passed, to try against all files. If
not specified a single password can be set in ACCOUNT_PWD environment variable.
-r, --remove=remove
[default: ] Comma separated list of bucket IDs to remove from all bag/s
-s, --updateStrategy=(atomic|force)
[default: atomic] Update strategy to use. Either "atomic" or "force".
-u, --apiUrl=apiUrl
[default: ws://localhost:9944] Runtime API URL. Mandatory in non-dev environment.
-y, --accountUri=accountUri
Account URI (optional). If not specified a single key can be set in ACCOUNT_URI environment
variable.
--keyStore=keyStore
Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/update-bags.ts
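For example, a sketch that atomically adds bucket 1 to, and removes bucket 2 from, a dynamic channel bag (all IDs are placeholders):

```bash
yarn storage-node leader:update-bags -i dynamic:channel:4 -a 1 -r 2 -s atomic
```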
Add/remove a content ID from the blacklist (adds by default).
USAGE
$ storage-node leader:update-blacklist
OPTIONS
-a, --add=add [default: ] Content ID to add
-h, --help show CLI help
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-r, --remove=remove [default: ] Content ID to remove
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/update-blacklist.ts
Update storage bucket status (accepting new bags).
USAGE
$ storage-node leader:update-bucket-status
OPTIONS
-h, --help show CLI help
-i, --bucketId=bucketId (required) Storage bucket ID
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-s, --set=(on|off) (required) Sets 'accepting new bags' parameter for the bucket
(on/off).
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/update-bucket-status.ts
Update data size fee. Requires storage working group leader permissions.
USAGE
$ storage-node leader:update-data-fee
OPTIONS
-f, --fee=fee (required) New data size fee
-h, --help show CLI help
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/update-data-fee.ts
Update data object bloat bond value. Requires storage working group leader permissions.
USAGE
$ storage-node leader:update-data-object-bloat-bond
OPTIONS
-h, --help show CLI help
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-v, --value=value (required) New data object bloat bond value
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/update-data-object-bloat-bond.ts
Update number of storage buckets used in the dynamic bag creation policy.
USAGE
$ storage-node leader:update-dynamic-bag-policy
OPTIONS
-h, --help show CLI help
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-n, --number=number (required) New storage buckets number
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed,
to try against all files. If not specified a single password
can be set in ACCOUNT_PWD environment variable.
-t, --bagType=(Channel|Member) (required) Dynamic bag type (Channel, Member).
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be
set in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into
keystore.
See code: src/commands/leader/update-dynamic-bag-policy.ts
Update VoucherMaxObjectsSizeLimit and VoucherMaxObjectsNumberLimit for the Joystream node storage.
USAGE
$ storage-node leader:update-voucher-limits
OPTIONS
-h, --help show CLI help
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-o, --objects=objects (required) New 'max voucher object number limit' value
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-s, --size=size (required) New 'max voucher object size limit' value
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/leader/update-voucher-limits.ts
Accept pending storage bucket invitation.
USAGE
$ storage-node operator:accept-invitation
OPTIONS
-h, --help show CLI help
-i, --bucketId=bucketId (required) Storage bucket ID
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords
can be passed, to try against all files. If not
specified a single password can be set in
ACCOUNT_PWD environment variable.
-t, --transactorAccountId=transactorAccountId (required) Transactor account ID (public key)
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL.
Mandatory in non-dev environment.
-w, --workerId=workerId (required) Storage operator worker ID
-y, --accountUri=accountUri Account URI (optional). If not specified a
single key can be set in ACCOUNT_URI
environment variable.
--keyStore=keyStore Path to a folder with multiple key files to
load into keystore.
See code: src/commands/operator/accept-invitation.ts
Set metadata for the storage bucket.
USAGE
$ storage-node operator:set-metadata
OPTIONS
-e, --endpoint=endpoint Root distribution node endpoint
-h, --help show CLI help
-i, --bucketId=bucketId (required) Storage bucket ID
-j, --jsonFile=jsonFile Path to JSON metadata file
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --password=password Password to unlock keyfiles. Multiple passwords can be passed, to
try against all files. If not specified a single password can be
set in ACCOUNT_PWD environment variable.
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API URL. Mandatory in
non-dev environment.
-w, --workerId=workerId (required) Storage operator worker ID
-y, --accountUri=accountUri Account URI (optional). If not specified a single key can be set
in ACCOUNT_URI environment variable.
--keyStore=keyStore Path to a folder with multiple key files to load into keystore.
See code: src/commands/operator/set-metadata.ts
Starts the storage node server.
USAGE
$ storage-node server
OPTIONS
-b, --buckets=buckets
[default: ] Comma separated list of bucket IDs to service. Buckets that are not assigned to
the worker are ignored. If not specified, all buckets will be serviced.
-c, --cleanup
Enable cleanup/pruning of no-longer assigned assets.
-d, --uploads=uploads
(required) Data uploading directory (absolute path).
-e, --elasticSearchEndpoint=elasticSearchEndpoint
Elasticsearch endpoint (e.g.: http://some.com:8081).
Log level could be set using the ELASTIC_LOG_LEVEL environment variable.
Supported values: warn, error, debug, info. Default: debug
-h, --help
show CLI help
-i, --cleanupInterval=cleanupInterval
[default: 360] Interval between periodic cleanup actions (in minutes)
-i, --syncInterval=syncInterval
[default: 20] Interval between synchronizations (in minutes)
-k, --keyFile=keyFile
Path to key file to add to the keyring.
-l, --logFilePath=logFilePath
Absolute path to the rolling log files.
-m, --dev
Use development mode
-n, --logMaxFileNumber=logMaxFileNumber
[default: 7] Maximum rolling log files number.
-o, --port=port
(required) Server port.
-p, --password=password
Password to unlock keyfiles. Multiple passwords can be passed, to try against all files. If
not specified a single password can be set in ACCOUNT_PWD environment variable.
-q, --storageSquidEndpoint=storageSquidEndpoint
(required) [default: http://localhost:4352/graphql] Storage Squid graphql server endpoint
(e.g.: http://some.com:4352/graphql)
-r, --syncWorkersNumber=syncWorkersNumber
[default: 20] Sync workers number (max async operations in progress).
-s, --sync
Enable data synchronization.
-t, --syncWorkersTimeout=syncWorkersTimeout
[default: 30] Asset downloading timeout for the synchronization (in minutes).
-u, --apiUrl=apiUrl
[default: ws://localhost:9944] Runtime API URL. Mandatory in non-dev environment.
-w, --worker=worker
(required) Storage provider worker ID
-x, --logMaxFileSize=logMaxFileSize
[default: 50000000] Maximum rolling log files size in bytes.
-y, --accountUri=accountUri
Account URI (optional). If not specified a single key can be set in ACCOUNT_URI environment
variable.
-z, --logFileChangeFrequency=(yearly|monthly|daily|hourly|none)
[default: daily] Log files update frequency.
--elasticSearchIndexPrefix=elasticSearchIndexPrefix
Elasticsearch index prefix. Node ID will be appended to the prefix. Default: logs-colossus.
Can be passed through ELASTIC_INDEX_PREFIX environment variable.
--elasticSearchPassword=elasticSearchPassword
Elasticsearch password for basic authentication. Can be passed through ELASTIC_PASSWORD
environment variable.
--elasticSearchUser=elasticSearchUser
Elasticsearch user for basic authentication. Can be passed through ELASTIC_USER environment
variable.
--keyStore=keyStore
Path to a folder with multiple key files to load into keystore.
--maxBatchTxSize=maxBatchTxSize
[default: 20] Maximum number of `accept_pending_data_objects` in a batch transaction.
--pendingFolder=pendingFolder
Directory to store pending files which are being uploaded (absolute path).
If not specified a subfolder under the uploads directory will be used.
--syncRetryInterval=syncRetryInterval
[default: 3] Interval before retrying failed synchronization run (in minutes)
--tempFolder=tempFolder
Directory to store temporary files during sync (absolute path).
If not specified a subfolder under the uploads directory will be used.
See code: src/commands/server.ts
Runs the data objects cleanup/pruning workflow. It removes all the locally stored data objects that the operator is no longer obliged to store.
USAGE
$ storage-node util:cleanup
OPTIONS
-b, --bucketId=bucketId                         (required) The bucket ID to prune/clean up
-d, --uploads=uploads (required) Data uploading directory (absolute
path).
-h, --help show CLI help
-k, --keyFile=keyFile Path to key file to add to the keyring.
-m, --dev Use development mode
-p, --cleanupWorkersNumber=cleanupWorkersNumber [default: 20] Cleanup/Pruning workers number
(max async operations in progress).
-p, --password=password Password to unlock keyfiles. Multiple
passwords can be passed, to try against all
files. If not specified a single password can
be set in ACCOUNT_PWD environment variable.
-q, --queryNodeEndpoint=queryNodeEndpoint [default: http://localhost:4352/graphql]
Storage Squid graphql server endpoint (e.g.:
http://some.com:4352/graphql)
-u, --apiUrl=apiUrl [default: ws://localhost:9944] Runtime API
URL. Mandatory in non-dev environment.
-w, --workerId=workerId (required) Storage node operator worker ID.
-y, --accountUri=accountUri Account URI (optional). If not specified a
single key can be set in ACCOUNT_URI
environment variable.
--keyStore=keyStore Path to a folder with multiple key files to
load into keystore.
See code: src/commands/util/cleanup.ts
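For example (the worker ID, bucket ID and path are placeholders):

```bash
yarn storage-node util:cleanup -w 0 -b 1 -d ~/uploads
```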
Downloads all data objects of the specified bucket that match the worker ID obligations.
USAGE
$ storage-node util:fetch-bucket
OPTIONS
-b, --bucketId=bucketId                           (required) The bucket ID to fetch
-d, --uploads=uploads (required) Data uploading directory
(absolute path).
-h, --help show CLI help
-n, --syncWorkersNumber=syncWorkersNumber [default: 20] Sync workers number (max
async operations in progress).
-o, --dataSourceOperatorUrl=dataSourceOperatorUrl Storage node url base (e.g.:
http://some.com:3333) to get data from.
-q, --queryNodeEndpoint=queryNodeEndpoint [default: http://localhost:4352/graphql]
Storage Squid graphql server endpoint
(e.g.: http://some.com:4352/graphql)
-t, --syncWorkersTimeout=syncWorkersTimeout [default: 30] Asset downloading timeout for
the syncronization (in minutes).
--tempFolder=tempFolder                           Directory to store temporary files during
                                                  sync and upload (absolute path). If not
                                                  specified a subfolder under the uploads
                                                  directory will be used.
See code: src/commands/util/fetch-bucket.ts
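For example (the bucket ID and uploads path are placeholders):

```bash
yarn storage-node util:fetch-bucket -b 1 -d ~/uploads
```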
Creates a multihash (blake3) for a file.
USAGE
$ storage-node util:multihash
OPTIONS
-f, --file=file (required) Path to the file to hash.
-h, --help show CLI help
See code: src/commands/util/multihash.ts
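For example (the file path is a placeholder):

```bash
yarn storage-node util:multihash -f ./video.mp4
```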
Verifies that a bag ID is supported by the storage node. Requires a chain connection.
USAGE
$ storage-node util:verify-bag-id
OPTIONS
-h, --help
show CLI help
-i, --bagId=bagId
(required) Bag ID. Format: {bag_type}:{sub_type}:{id}.
- Bag types: 'static', 'dynamic'
- Sub types: 'static:council', 'static:wg', 'dynamic:member', 'dynamic:channel'
- Id:
- absent for 'static:council'
- working group name for 'static:wg'
- integer for 'dynamic:member' and 'dynamic:channel'
Examples:
- static:council
- static:wg:storage
- dynamic:member:4
See code: src/commands/util/verify-bag-id.ts
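For example, using one of the bag ID formats listed above:

```bash
yarn storage-node util:verify-bag-id -i dynamic:channel:4
```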