diff --git a/Build/grest-changelog/index.html b/Build/grest-changelog/index.html index 244e20ba5..77917ef89 100644 --- a/Build/grest-changelog/index.html +++ b/Build/grest-changelog/index.html @@ -863,6 +863,47 @@
This will be the first major [breaking] release for Koios consumers in a while, and will be rolled out under a new base prefix (/api/v1).
The major work in this release was to start making use of newer flags in dbsync which help the performance of queries behind the new endpoints. You will also see quite a few new endpoint additions below, which should help with slightly lighter versions of queries. To keep migration paths easier, we will ensure both the v0 and v1 versions of the release are available for a month post release, before retiring v0.
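As a quick illustration of the new base prefix (the hostname below assumes the public Koios gateway; self-hosted instances will differ), existing consumers would move their calls from /api/v0/... to /api/v1/...:

```bash
# Query the chain tip via the new versioned prefix:
curl -s "https://api.koios.rest/api/v1/tip"
# Standard PostgREST horizontal filtering applies to newly added fields too,
# e.g. the pool_status column added to /pool_list in this release:
curl -s "https://api.koios.rest/api/v1/pool_list?pool_status=eq.registered"
```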
New endpoints added:
- `/pool_registrations` - List of all pool registrations initiated in the requested epoch #239
- `/pool_retirements` - List of all pool retirements initiated in the requested epoch #239
- `/treasury_withdrawals` - List of withdrawals made from treasury #239
- `/reserve_withdrawals` - List of withdrawals made from reserves (MIRs) #239
- `/account_txs` - Transactions associated with a given stake address #239
- `/address_utxos` - Get UTxO details for requested addresses #239
- `/asset_utxos` - Get UTxO details for requested assets #239
- `/script_utxos` - Get UTxO details for requested script hashes #239
- `/utxo_info` - Details for requested UTxO arrays #239
- `/script_info` - Information about a given script from script hashes #239
- `/ogmios/` - Expose stateless ogmios endpoints #1690
Changes for existing endpoints:
- `/account_utxos`, `/credential_utxos` - Accept `extended` as an additional flag, which enables `asset_list`, `reference_script` and `inline_datum` in the output #239
- `/block_txs` - Flatten output with transaction details (`tx_hash`, `epoch_no`, `block_height`, `block_time`) instead of a `tx_hashes` array #239
- `/epoch_params` - Update `cost_models` to JSON (upstream change in node) #239
- `/account_assets`, `/address_assets` - Flatten the output result (instead of an `asset_list` array), making it easier to apply horizontal filtering based on any of the fields
- `/account_utxos`, `/address_utxos`, `/asset_utxos`, `/script_utxos` and `/utxo_info` - Return the same schema, giving complete details about the UTxOs involved, with a few fields toggled based on the `extended` input flag #239
- `/pool_list` - Add various details to the endpoint for each pool (`pool_id_hex`, `active_epoch_no`, `margin`, `fixed_cost`, `pledge`, `reward_addr`, `owners`, `relays`, `ticker`, `meta_url`, `meta_hash`, `pool_status`, `retiring_epoch`) - this should mean some of the requests to `pool_info` should no longer be required #239
- `/pool_updates` - In v0, `pool_updates` only provided pool registration updates, while `pool_status` corresponded to the current status of the pool. With v1, we will have registration as well as deregistration transactions, and each transaction will have `update_type` (enum of either `registration` or `deregistration`) instead of `pool_status`. As a side-effect, since a deregistration transaction only has `retiring_epoch` as metadata, all the other fields will show up as `null` for such a transaction #239
- `/pool_metadata`, `/pool_relays` - Add `pool_status` field to denote whether the pool is retired #239
- `/datum_info` - Rename `hash` to `datum_hash` and add `creation_tx_hash` #239
- `/native_script_list` - Remove `script` column (as it has pretty large output, better queried against `script_info`), add `size`, and change `type` to text #239
- `/plutus_script_list` - Add `type` and `size` to output #239
- `/asset_info` - Add `cip68_metadata` JSONB field #239
- `/pool_history` - Add `member_rewards` #225

Deprecations:
- `/tx_utxos` - No longer required as replaced by `/utxo_info` #239

Chores:
- v1 from v0 #1690
- `epoch_info_cache` - Remove protocol parameters, as they can be queried from the live table. Accordingly, update dependent queries #239
- `consumed_by_tx_in_id` column in `tx_out` from dbsync 13.1.1.3 across endpoints #239
- `_last_active_stake_validated_epoch` in `active_stake_cache` #222

The release is effectively the same as 1.0.10rc
except with one minor modification below.
cs.[{"key":"value"}]
in PostgREST #172This release primarily focuses on ability to support better DeFi projects alongwith some value addition for existing clients by bringing in 10 new endpoints (paired with 2 deprecations), and few additional optional input parameters , and some additional output columns to existing endpoints. The only breaking change/fix is for output returned for tx_info
.
Also, dbsync 13.1.x.x has been released and is recommended to be used with this release.
- `/policy_asset_list` - Returns list of assets under the given policy (including supply) #142, #149
- `/account_addresses` - Add optional `_first_only` and `_empty` flags to show only the first address with tx, or to include empty addresses in the output #149
- `/epoch_info` - Add optional `_include_next_epoch` field to show next epoch stats if available (eg: nonce, active stake) #143
- `/tx_info` - Change `_invalid_before` and `_invalid_after` to text fields #141
- `tx_info` - Remove the field `plutus_contracts` > [array] > `outputs`, as there is no logic to connect it to inputs spending #163
- `/asset_address_list` - Renamed to `asset_addresses`, keeping naming in line with other endpoints (old one still present, but will be deprecated in a future release) #149
- `/asset_policy_info` - Renamed to `policy_asset_info`, keeping naming in line with other endpoints (old one still present, but will be deprecated in a future release) #149
- `/epoch_info`, `/epoch_params` - Restrict output to current epoch #149
- `/block_info` - Use `previous_id` field to show previous/next blocks (previously was using block_id/height) #145

This release candidate is non-breaking for existing methods and inputs, but breaking for output objects of endpoints. The aim with the release candidate version is to allow folks a couple of weeks to test and adapt their libraries before applying to mainnet.
- `datum_info` - List of datum information for given datum hashes
- `account_info_cached` - Same as `account_info`, but serves cached information instead of live data
- Update `address_info`, `address_assets`, `account_assets`, `tx_info`, `asset_list` and `asset_summary` to align the output `asset_list` object to return an array of `policy_id`, `asset_name`, `fingerprint` (and `quantity`, `minting_txs` where applicable) #120
- `asset_history` - Fix metadata to wrap in array to refer to the right object #122
- `tx_info` and `tx_metadata` - Align metadata for JSON output format #1542
- `blocks` - Query output aligned to specs (`epoch` => `epoch_no`)
- `pool_delegators_history` - Provides historical record for a pool's delegators #1486
- `pool_stake_snapshot` - Provides mark, set and go snapshot values for the pool being queried #1489
- `pool_delegators` - No longer accepts `_epoch_no` as a parameter, as it only returns live delegators. Additionally provides `latest_delegation_hash` in output #1486
- `tx_info` - `epoch` => `epoch_no` #1494

The format is based on Keep a Changelog, and this adheres to Semantic Versioning.
- Moved the `test_koios` call from cntools.library to cntools.sh
- `dialog` is now an optional component, and no longer installed by default
- Dropped the `--whole-utxo` flag, as it returns all addresses and will not accept `--address`
- Use `--cold-verification-key-file` instead of `--verification-key-file`
- Fixed `pool >> show` stake distribution showing up as always 0

The docker container has an optional backup and restore functionality that can be used to back up the /opt/cardano/cnode/db directory. To have the backup persist longer than the container, the backup directory should be mounted as a volume.

[!NOTE] The backup and restore functionality is disabled by default.

[!WARNING] Make sure adequate space exists on the host, as the backup will double the space consumed by the database.

When the container is started with the ENABLE_BACKUP environment variable set to Y, the container will automatically create a backup in the /opt/cardano/cnode/backup/$NETWORK-db directory. The backup will be created when the container is started and if the backup directory is smaller than the db directory.

When the container is started with the ENABLE_RESTORE environment variable set to Y, the container will automatically restore the latest backup from the /opt/cardano/cnode/backup/$NETWORK-db directory. The database will be restored when the container is started and if the backup directory is larger than the db directory.
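A minimal sketch of wiring this up (the image and volume names below are assumptions based on the guild docker docs; adjust to your deployment):

```bash
# Start the node container with backups enabled, persisting db and backup dirs as named volumes:
docker run -d --name cardano-node \
  -e NETWORK=mainnet \
  -e ENABLE_BACKUP=Y \
  -v cnode-db:/opt/cardano/cnode/db \
  -v cnode-backup:/opt/cardano/cnode/backup \
  cardanocommunity/cardano-node
```

On a subsequent start with `-e ENABLE_RESTORE=Y` (and the same backup volume mounted), the container would restore the latest backup per the conditions above.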
This documentation site (rather, the repository itself) is created by some of the well known and experienced community members, and contains instructions/information about various guild tools which simplify stake-ops (setting up, managing and monitoring pools) for operators. Note that the guides are present to help you simplify your tasks - but as an entity responsible for creating blocks on a financial platform, we expect some basic pre-requisite skill sets - at a professional level - before entering the portal:
- Familiarity with `cardano-cli`, having worked on preview/preprod/guild networks for pool operations without the use of wrapper scripts - as an education exercise;

Everyone is welcome to contribute to the repository (via documentation, testing, code, videos, etc). Our aim is to work together and reduce confusion rather than hosting 100 versions of documentation - each marketing their pool in their own way.
"},{"location":"#support","title":"Support","text":"The Telegram Support channel is used to announce new releases and changes to the code base. This is also the place to ask general questions regarding the documentation and scripts on this site.
To report bugs and issues with scripts and documentation please open a GitHub Issue. Feature requests are best opened as a discussion thread.
"},{"location":"#getting-started","title":"Getting Started","text":"Use the sidebar to navigate through the topics. Note that the instructions assume the folder structure as per here.
Again, feedback/contribution and ownership of tasks is always welcome. If you're interested in collaborating regularly, make a start - and you should be part of the guild already.
"},{"location":"basics/","title":"Basics","text":""},{"location":"basics/#architecture","title":"Architecture","text":"The architecture for various components are already described at docs.cardano.org by CF/IOHK. We will not reinvent the wheel
"},{"location":"basics/#manual-software-pre-requirements","title":"Manual Software Pre-Requirements","text":"While we do not intend to hand out step-by-step instructions, the tools are often misused as a shortcut to avoid ensuring base skillsets mentioned on home page. Some of the common gotchas that we often find SPOs to miss out on:
- It is imperative that pools operate with highly accurate system time, in order to propagate blocks to the network in a timely manner and avoid penalties to your own (or at times other competing) blocks. Please refer to sample guidance [here](https://ubuntu.com/server/docs/network-ntp) for details - the precise steps may depend on your OS.\n- Ensure your Firewall rules at Network as well as OS level are updated according to the usage of your system; you'd want to whitelist only the rules that you really need to open to the world (eg: you might need node, SSH, and potentially secured webserver/proxy ports to be open, depending on the components you run).\n- Update your SSH Configuration to prevent password-based logon (see the sketch below this list).\n- Ensure that you use an offline workflow; you should never need to have your offline keys on online nodes. The tools provide you backup/restore functionality to only pass online keys to online nodes.\n
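As a rough illustration of the SSH hardening point above (a stock OpenSSH `sshd_config` is assumed - verify key-based login works before disabling passwords):

```bash
# Verify key-based login works FIRST, then disable password logon and reload sshd:
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload sshd
```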
"},{"location":"basics/#pre-requisites","title":"Pre-Requisites","text":"Reminder !!
You're expected to run the commands below from the same session, using the same working directories as indicated, and using a non-root user with sudo access. You are expected to be familiar with this as part of the pre-requisite skill sets for stake pool operators.
The pre-requisites for Linux systems are automated to be executed as a single script. To download the pre-requisites scripts, execute the below:
mkdir \"$HOME/tmp\";cd \"$HOME/tmp\"\n# Install curl\n# CentOS / RedHat - sudo dnf -y install curl\n# Ubuntu / Debian - sudo apt -y install curl\ncurl -sS -o guild-deploy.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/guild-deploy.sh\nchmod 755 guild-deploy.sh\n
Please familiarise yourself with the syntax of guild-deploy.sh before proceeding. The usage syntax can be checked using ./guild-deploy.sh -h; sample output below:
Usage: guild-deploy.sh [-n <mainnet|preprod|guild|preview>] [-p path] [-t <name>] [-b <branch>] [-u] [-s [p][b][l][f][d][c][o][w][x]]\nSet up dependencies for building/using common tools across cardano ecosystem.\nThe script will always update dynamic content from existing scripts retaining existing user variables\n\n-n Connect to specified network instead of mainnet network (Default: connect to cardano mainnet network) eg: -n guild\n-p Parent folder path underneath which the top-level folder will be created (Default: /opt/cardano)\n-t Alternate name for top level folder - only alpha-numeric chars allowed (Default: cnode)\n-b Use alternate branch of scripts to download - only recommended for testing/development (Default: master)\n-u Skip update check for script itself\n-s Selective Install, only deploy specific components as below:\n p Install common pre-requisite OS-level Dependencies for most tools on this repo (Default: skip)\nb Install OS level dependencies for tools required while building cardano-node/cardano-db-sync components (Default: skip)\nl Build and Install libsodium fork from IO repositories (Default: skip)\nf Force overwrite entire content of scripts and config files (backups of existing ones will be created) (Default: skip)\nd Download latest (released) binaries for bech32, cardano-address, cardano-node, cardano-cli, cardano-db-sync and cardano-submit-api binaries (Default: skip)\nc Install/Upgrade CNCLI binary (Default: skip) # (1)!\no Install/Upgrade Ogmios Server binary (Default: skip)\nw Install/Upgrade Cardano Hardware CLI (Default: skip)\nx Install/Upgrade Cardano Signer binary (Default: skip)\n
If you see an error related to glibc, it is likely due to a build mismatch between the pre-compiled binary and your OS, which is not uncommon. You may need to compile cncli manually on your OS as per instructions here - make sure to copy the output binary to the \"${HOME}/.local/bin\" folder.

This script uses opt-in selection of what you'd like it to do (as opposed to the previous version, which tried to auto-detect versions). The defaults without any arguments will only update the static part of script contents for you. A typical install of most components - without overwriting the static part of existing files - for the preview network would be:
./guild-deploy.sh -b master -n preview -t cnode -s pdlcowx\n. \"${HOME}/.bashrc\"\n
If instead of download, you'd want to build the components yourself, you could use:
./guild-deploy.sh -b master -n preview -t cnode -s pblcowx\n. \"${HOME}/.bashrc\"\n
Lastly, if you'd want to update your scripts but not install any additional dependencies, you may simply run:
./guild-deploy.sh -b master -n preview -t cnode\n
"},{"location":"basics/#folder-structure","title":"Folder structure","text":"Running the script above will create the folder structure as per below, for your reference. You can replace the top level folder /opt/cardano/cnode
by editing the value of CNODE_HOME
in ~/.bashrc
and $CNODE_HOME/files/env
files:
/opt/cardano/cnode # Top-Level Folder
├── ...
├── files # Config, genesis and topology files
│   ├── ...
│   ├── byron-genesis.json # Byron Genesis file referenced in config.json
│   ├── shelley-genesis.json # Genesis file referenced in config.json
│   ├── alonzo-genesis.json # Alonzo Genesis file referenced in config.json
│   ├── config.json # Config file used by cardano-node
│   └── topology.json # Map of chain for cardano-node to boot from
├── db # DB Store for cardano-node
├── guild-db # DB Store for guild-specific tools and additions (eg: cncli, cardano-db-sync's schema)
├── logs # Logs for cardano-node
├── priv # Folder to store your keys (permission: 600)
├── scripts # Scripts to start and interact with cardano-node
└── sockets # Socket files created by cardano-node
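For instance, a minimal sketch of relocating the top-level folder (the target path is hypothetical; take backups of both files before editing):

```bash
# Hypothetical example: rename the top-level folder and update both references
sudo mv /opt/cardano/cnode /opt/cardano/mynode
sed -i 's#/opt/cardano/cnode#/opt/cardano/mynode#g' "${HOME}/.bashrc" /opt/cardano/mynode/files/env
source "${HOME}/.bashrc"
```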
"},{"location":"build/","title":"Overview","text":"The documentation here uses instructions from IOHK repositories as a foundation, with additional info which we can contribute to where appropriate. Note that not everyone needs to build each component. You can refer to architecture to understand and qualify which of the components built by IO you want to run.
"},{"location":"build/#components","title":"Components","text":"For most Pool Operators, simply building cardano-node should be enough. Use the below to decide whether you need other components:
graph TB A([Interact with HD Wallets locally]) B([Explore blockchain locally]) C([Easy pool-ops and fund management]) D([Create Custom Assets]) E([Monitor node using Terminal UI]) F([Sign/verify any data using crypto keys]) N(Node) O(Ogmios) P(gRest/Koios) Q(DBSync) R(Wallet) S(CNTools) T(Tx Submit API) U(GraphQL) V(OfflineMetadataTools) X(gLiveView) Y(cardano-signer) Z[(PostgreSQL)] N --x C --x S N --x D --x S & V N --x E --x X N --x B B --x U --x Q B --x P --x Q P --x O P --x T F ---x Y N --x A --x R Q --x Z

Important
We strongly prefer the use of gRest over GraphQL components due to performance, security, simplicity, control and - most importantly - consistency benefits. Please refer to the official documentation if you're interested in GraphQL or Cardano-Rest components instead.
Note
The instructions are intentionally limited to stack/cabal to avoid wait times/availability of nix/docker files on a rapidly developing codebase - this also helps us avoid managing multiple versions of instructions.
"},{"location":"build/#description-for-components-built-by-community","title":"Description for components built by community","text":""},{"location":"build/#cntools","title":"CNTools","text":"A swiss army knife for pool operators, primarily built by Ola, to simplify typical operations regarding their wallet keys and pool management. You can read more about it here
"},{"location":"build/#gliveview","title":"gLiveView","text":"A local node monitoring tool, primarily built by Ola, to use in addition to remote monitoring tools like Prometheus/Grafana, Zabbix or IOG's RTView. This is especially useful when moving to a systemd deployment - if you haven't done so already - as it offers an intuitive UI to monitor the node status. You can read more about it here
"},{"location":"build/#topology-updater","title":"Topology Updater","text":"A temporary node-to-node discovery solution, run by Markus, that was started initially to bridge the gap created while awaiting completion of P2P on cardano network, but has since become an important lifeline to the network health - to allow everyone to activate their relay nodes without having to postpone and wait for manual topology completion requests. You can read more about it here
"},{"location":"build/#koiosgrest","title":"Koios/gRest","text":"A full-featured local query layer node to explore blockchain data (via dbsync) using standardised pre-built queries served via API as per standard from Koios - for which user can opt to participate in elastic query layer. You can read more about build steps here and reference API endpoints here
"},{"location":"build/#ogmios","title":"Ogmios","text":"A lightweight bridge interface for cardano-node. It offers a WebSockets API that enables local clients to speak Ouroboros' mini-protocols via JSON/RPC. You can read more about it here
"},{"location":"build/#cncli","title":"CNCLI","text":"A CLI tool written in Rust by Andrew Westberg for low-level communication with cardano-node. It is commonly used by SPOs to check their leader logs (integrates with CNTools as well as gLiveView) or to send their pool's health information to https://pooltool.io. You can read more about it here
"},{"location":"build/#cardano-signer","title":"Cardano Signer","text":"A tool written by Martin to sign/verify data (hex, text or binary) using cryptographic keys to generate data as per CIP-8 or CIP-36 standards. You can read more about it here
"},{"location":"contributors/","title":"Contributors","text":"Everyone is welcome to contribute to the guide, as well as the repository. Below is just a thank you to people who have been contributing consistently:
Adam Chris Damjan Homer Markus OCG Ola Ahlman Pal Dorogi Papacarp PegasusPool Psychomb RdLrT RedOracle SmaugPool
To start contributing, simply hit the github repository and raise Issue/Pull Request
"},{"location":"grest-meets/","title":"GRest Meeting summaries","text":"Thank you all for joining and contributing to the project
Below you can find a short summary of every GRest meeting held, both for logging purposes and for those who were not able to attend.
"},{"location":"grest-meets/#participants","title":"Participants:","text":"Participant 16Sep2021 02Sep2021 26Aug2021 19Aug2021 12Aug2021 29Jul2021 22Jul2021 15Jul2021 09Jul2021 02Jul2021 25Jun2021 Damjan Homer Markus Ola RdLrT Red Papacarp Paddy GimbaLabs 16Sep2021 02Sep2021 26Aug2021 19Aug2021 12Aug2021 29Jul2021 22Jul2021 15Jul2021 09Jul2021 02Jul2021After the initial stand-up updates from participants, we went through the entire Trello board, updating/deleting existing tickets and creating some new ones.
25Jun2021"},{"location":"grest-meets/#scheduling-running-update-queries","title":"Scheduling running update queries","text":"Solution being tested:
Pool cache table:
we will run the full query on regular intervals, ready for review for first iteration, will see about delta post tx cache query
transaction history:
need to think about how to approach inputs/outputs in the cached table (1 row per transaction with json objects for inputs/outputs or multiple rows for tx hash)
address_txs:
this endpoint should bring back list of txs, and have provision to use after and before block hash - lightweight against public schema
pool cache table:
create a trigger every 2 minutes (or similar) to run stake_distribution query
docker:
EXPLAIN (ANALYZE, BUFFERS)
Team
grest
schema)Individual
84226d33eed66be8e61d50b7e1dacebdc095cee9
on release/10.1.x
<query>.json
and sql in <query>.sql
), also remove get_
prefixnbthreads
in config, tune maxconn, switch to http mode)Ola added automatic deployment of services to the scripts last week. We added new tasks on Trello ticket, including flags for multiple networks (guild, testnet, mainnet), haproxy service dynamically creating hosts and doc updates. Overall, the script works well with some manual interaction still required at the moment.
"},{"location":"grest-meets/#supported-networks","title":"Supported Networks","text":"Just for the record here, a 16GB (or even 8GB) instance is enough to support both testnet and guild networks.
"},{"location":"grest-meets/#db-sync-versioning","title":"db-sync versioning","text":"We agreed to use the release/10.1.x
branch which is not yet released but built to include Alonzo migrations to avoid rework later. This version does require Alonzo config and hash to be in the node's config.json
. This has to be done manually and the files are available here. Once fully released, all members should rebuild the released version to ensure each instance is running the same code.
For the DNS setup ticket, we started to think about the instance names for the 2 DNS instances (orange in the graph). Submissions for names will be made in the Telegram group, and will probably make a poll once we have the entries finalised.
"},{"location":"grest-meets/#monitoring-system","title":"Monitoring System","text":"Priyank started setting up the monitoring on his instance which can then easily be switched to a separate monitoring instance. We agreed to use Prometheus / Grafana combo for data source / visualisation. We'll probably need to create some custom archiving of data to keep it long term as Prometheus stores only the last 30 days of data.
"},{"location":"grest-meets/#next-meeting","title":"Next meeting","text":"We would like to make Friday @ 07:00 UTC the standard time and keep meetings at weekly frequency. A poll will still be created for next weeks, but if there are no objections / requests for switching the time around (which we have not had so far) we can go ahead with the making Friday the standard with polls no longer required and only reminders / Google invites sent every week.
"},{"location":"grest-meets/#deployment-scripts_1","title":"Deployment scripts","text":"During the last week, work has been done on deployment scripts for all services (db-sync, gRest and haproxy) -> this is now in testing with updated instructions on trello. Everybody can put their name down on the ticket to signify when the setup is complete and note down any comments for bugs/improvements. This is the main priority at the moment as it would allow us to start transferring our setups to mainnet.
"},{"location":"grest-meets/#switch-to-mainnet","title":"Switch to Mainnet","text":"Following on from that, we created a ticket for starting to set up mainnet instances -> we can use 32GB RAM to start and increase later. While making sure everything works against the guild network is priority, people are free to start on this as well as we anticipate we are almost ready for the switch.
"},{"location":"grest-meets/#supported-networks_1","title":"Supported Networks","text":"This brings me to another discussion point which is on which networks are to be supported. After some discussion, it was agreed to keep beefy servers for mainnet, and have small independent instances for testnet maintained by those interested, while guild instance is pretty lightweight and useful to keep.
"},{"location":"grest-meets/#monitoring-system_1","title":"Monitoring System","text":"The ticket for creating a centralised monitoring system was discussed and updated. I would say it would be good to have at least a basic version of the system in place around the time we switch to mainnet. The system could eventually serve for: - analysis of instance - performances and subsequent tuning - endpoints usage - anticipation of system requirements increases - etc.
I would say that this should be an important topic of the next meeting to come up with an approach on how we will structure this system so that we can start building it in time for mainnet switch.
"},{"location":"grest-meets/#handling-ssl","title":"Handling SSL","text":"Enabling SSL was agreed to not be required by each instance, but is optional and documentation should be created for how to automate the process of renewing SSL certificates for those wishing to add it to their instance. The end user facing endpoints \"Instance Checker\" will of course be SSL-enabled.
"},{"location":"grest-meets/#next-meeting_1","title":"Next meeting","text":"We somewhat agreed to another meeting next week again at the same time, but some participants aren't 100% for availability. Friday at 07:00 UTC might be a good standard time we hold on to, but I will make a poll like last time so that we can get more info before confirming the meeting.
"},{"location":"grest-meets/#meeting-structure","title":"Meeting Structure","text":"As this was the first meeting, at the start we discussed about the meeting structure. In general, we agreed to something like listed below, but this can definitely change in the future:
1) 2-liner (60s) round-the-table stand-ups by everyone to sync up on what they were doing / are planning to do / mention struggles etc. This itself often sparks discussions. 2) going through the Trello board tasks with the intention of discussing and possibly assigning them to individuals / smaller groups (maybe 1-2-3 people choose to work together on a single task)
"},{"location":"grest-meets/#stand-ups","title":"Stand-ups","text":"We then proceeded to give a status of where we are individually in terms of what's been done, a summary below:
prereqs.sh
addendum can be done once artifacts are finalised (added a Trello ticket for tracking).All in all, I think we saw that there is need for these meetings as there are a lot of things to discuss and new ideas come up (like the monitoring system). We went for over an hour (~1h15min) and still didn't have enough time to go through the board, we basically only touched the DNS/haproxy part of the board. This tells me that we are in a stage where more frequent meetings are required, weekly instead of biweekly, as we are in the initial stage and it's important to build things right from the start rather than having to refactor later on. With that, the participants in general agreed to another meeting next week, but this will be confirmed in the TG chat and the times can be discussed then.
"},{"location":"sidebar/","title":"Tree","text":"The scripts on guild-operators repository have gone through quite a few changes to accomodate for the below:
prereqs.sh
with guild-deploy.sh
using minimalistic approach (i.e. anything you need to deploy is now required to be specified using command-line arguments). The old prereqs.sh
is left as-is but will no longer be maintained.prereqs.sh -t pvnode
would have created folder structure as /opt/cardano/pvnode
and replaced CNODE_HOME
references within scripts with PVNODE_HOME
. This will no longer be required. The deriving of top level folder will be done relative to scripts folder. Thus, parent of the folder containing env
file will automatically be treated as top level folder, and no longer depend on external environment variable. One may still use them for their own comfort to switch directories.CNODE_HOME
references.\"${HOME}\"/.local/bin
. Previously, we could have had binaries deployed to various locations (\"${HOME}\"/.cabal/bin
for node/CLI binaries, \"${HOME}\"/.cargo/bin
for cncli binary, \"${HOME}\"/bin
for downloaded binaries). This occured because of different compilers used different default locations for their output binariess (cargo for rust, cabal for Haskell, etc). The guild-deploy.sh/cabal-build-all.sh scripts will now provision the binaries to be made available to \"${HOME}\"/.local/bin instead. Ofcourse, as before, you can still customise the location of binaries using variables (eg: CCLI
, CNCLI
, CNODE_HOME
) in env
file.guild-deploy.sh
, giving users both the options.Some of the above required us to add breaking changes to some scripts, but hopefully the above explains the premise for those changes. To ease this one-time upgrade process for existing deployments, we have tried to come up with the guide below, feel free to edit this file to improve the documents based on your experience. Again, apologies in advance to those who do not agree with the above changes (the old code would ofcourse remain unimpacted at tag legacy-scripts
, so if you'd like to stick to old scripts , you can use -b legacy-scripts
for your tools to switch back).
Warning
Make sure you go through upgrade steps for your setup in a non-mainnet environment first!
guild-deploy.sh
(checkout new syntax with guild-deploy.sh -h
) to update all the scripts and files from the guild template. The scripts modified with user content (env
, gLiveView.sh
, topologyUpdater.sh
, cnode.sh
, etc) will be backed up before overwriting. The backed up files will be in the same folder as the original files, and will be named as ${filename}_bkp<timestamp>
. More static files (genesis files or some of the scripts themselves) will not be backed up, as they're not expected to be modified.Remember
Please add any environment-specific parameters (eg: custom top level folder, network flag, etc) to the execution command below, similar to prereqs.sh (check new syntax using guild-deploy.sh -h
)
mkdir \"$HOME/tmp\";cd \"$HOME/tmp\"\ncurl -sS -o guild-deploy.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/guild-deploy.sh\nchmod 700 guild-deploy.sh\n./guild-deploy.sh -s f -b master\n
\"${HOME}\"/.local/bin
is now part of your $PATH environment variable.source \"${HOME}\"/.bashrc\necho \"${PATH}\"\n
Check and add back your customisations to config files (or simply restore from automatically created backup of your config/topology files).
Since one of the basic changes we start to recommend as part of this revamp is moving your binaries to \"${HOME}\"/.local/bin
, you would want to move the binaries below from current location:
cabal build all
script (eg: cardano-node
, cardano-cli
, bech32
, cardano-address
, cardano-submit-api
, cardano-db-sync
cardano install
(eg: cncli
)prereqs.sh
(eg: cardano-hw-cli
)You can move the binaries by using mv command (for example, if you dont have any other files in these folders, you can use the command below:
Note
Ideally, you should shutdown services (eg: cnode, cnode-dbsync, etc) prior to running the below to ensure they run from new location (you can also re-deploy them if you haven't done so in a while, eg: ./cnode.sh -d
). At the end of the guide, you can start them back up.
mv -t \"${HOME}\"/.local/bin/ \"${HOME}\"/.cabal/bin/* \"${HOME}\"/.cargo/bin/* \"${HOME}\"/bin/*\n
We've found users often confuse between $PATH variable resolution between multiple shell sessions, systemd, etc. To avoid this, edit the following files and uncomment and set the following variables to the appropriate paths as per your deployment (eg: CCLI=\"${HOME}\"/.local/bin/cardano-cli
if following above):
The above should take care of tools and services. However, you might still have duplicate binaries in your $PATH (previous artifacts, re-build using old scripts, etc) - it is best that you remove any old binary files from alternate folders. You can do so by executing the below:
whereis bech32 cardano-address cardano-cli cardano-db-sync cardano-hw-cli cardano-node cardano-submit-api cncli ogmios\n
The above might result in some lines having more than one entry (eg: you might have cardano-cli
in \"${HOME}\"/.cabal/bin
and \"${HOME}\"/.local/bin
) - for which you'd want to delete the reference(s) not in \"${HOME}\"/.local/bin
, while for other cases - you might have no values (eg: you may not use cardano-db-sync
, cncli
, ogmios
and/or cardano-hw-cli
. You need not take any actions for the binaries you do not use.
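A hypothetical cleanup, assuming the duplicates showed up under the old cabal/cargo locations (adjust the list to whatever whereis actually reported for you):

```bash
# Remove stale copies outside "${HOME}"/.local/bin (paths here are examples only):
rm -f "${HOME}"/.cabal/bin/cardano-cli "${HOME}"/.cargo/bin/cncli
```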
Hope the guide above helps you with the migration, but again - we could've missed some edge cases. If so, please report via chat in the Koios Discussions channel only. Please DO NOT make edits to the script content based on forum/alternate guides/channels - while done with the best intentions, there have been solutions put online that modify files unnecessarily instead of correcting configs and disabling updates; such actions will only cause trouble for future updates.
"},{"location":"Appendix/RecoverByronWallet/","title":"Unofficial Instructions for recovering your Byron Era funds on the new Incentivized Shelley Testnet","text":""},{"location":"Appendix/RecoverByronWallet/#1-grab-and-install-haskell","title":"1. Grab and install Haskell","text":"curl -sSL https://get.haskellstack.org/ | sh\n
"},{"location":"Appendix/RecoverByronWallet/#2-get-the-wallet","title":"2. Get the wallet","text":"note: you must build from source as of today as there are changes that just got into master you need
git clone https://github.com/input-output-hk/cardano-wallet.git\n
"},{"location":"Appendix/RecoverByronWallet/#3-go-into-the-wallet-directory","title":"3. Go into the wallet directory","text":"cd cardano-wallet\n
"},{"location":"Appendix/RecoverByronWallet/#4-build-the-wallet","title":"4. Build the wallet","text":"stack build --test --no-run-tests\n
If it fails there are a few reasons we have found: - The cardano build instructions reference a few things that may be missing. Check those. - or maybe one of these would help:"},{"location":"Appendix/RecoverByronWallet/#libssl","title":"Libssl:","text":"sudo apt install libssl-dev\n
"},{"location":"Appendix/RecoverByronWallet/#sqlite","title":"Sqlite :","text":"sudo apt-get install sqlite3 libsqlite3-dev \n
"},{"location":"Appendix/RecoverByronWallet/#gmp","title":"gmp:","text":"sudo apt-get install libgmp3-dev \n
"},{"location":"Appendix/RecoverByronWallet/#systemd-dev","title":"systemd dev:","text":"sudo apt install libsystemd-dev\n
get coffee... It takes awhile
"},{"location":"Appendix/RecoverByronWallet/#5-when-its-done-install-executables-to-your-path","title":"5. When its done, install executables to your path","text":"stack install\n
"},{"location":"Appendix/RecoverByronWallet/#6-test-to-make-sure-cardano-wallet-jormungandr-works-fine","title":"6. Test to make sure cardano-wallet-jormungandr works fine.","text":"Generate your new mnemonics you will need below. Note that this generates 15 words as opposed to your byron era mnemnomics which were only 12 words.
cardano-wallet-jormungandr mnemonic generate\n
"},{"location":"Appendix/RecoverByronWallet/#7-launch-the-wallet-as-a-service","title":"7. Launch the wallet as a service.","text":"you can either open another terminal window or use screen or something. anyway, wherever you run this next command you won't be able to use anymore for a terminal until you stop the wallet
change --node-port 3001 to wherever you have your jormungandr rest interface running. for me it was 5001.. so
change --port 3002 to wherever you want to access the wallet interface at. If you have other things running avoid those ports. for most, 3002 should be free
just to future proof these instructions. genesis should be whatever genesis you are on.
cardano-wallet-jormungandr serve --node-port 3001 --port 3002 --genesis-block-hash e03547a7effaf05021b40dd762d5c4cf944b991144f1ad507ef792ae54603197\n
"},{"location":"Appendix/RecoverByronWallet/#8-restore-your-byron-wallet","title":"8. Restore your byron wallet:","text":"--->in another window
replace foo, foo, foo with all your mnemnomics from the byron wallet you are restoring
Also, if you put your wallet on a different port than 3002, fix that too
curl -X POST -H \"Content-Type: application/json\" -d '{ \"name\": \"legacy_wallet\", \"mnemonic_sentence\": [\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\"], \"passphrase\": \"areallylongpassword\"}' http://localhost:3002/v2/byron-wallets\n
Thats going to spit out some information about a wallet it creates, you should see the value of your wallet - hopefully its not zero. And you need the wallet ID for the next step"},{"location":"Appendix/RecoverByronWallet/#9-create-your-shelley-wallet","title":"9. Create your shelley wallet:","text":"Remember all those mnemnomics you made above.. put them here instead of all the foo's.
curl -X POST -H \"Content-Type: application/json\" -d '{ \"name\": \"pool_wallet\", \"mnemonic_sentence\": [\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\"], \"passphrase\": \"areallylongpasswordagain\"}' http://localhost:3002/v2/wallets\n
Important thing to get is the wallet id from this command"},{"location":"Appendix/RecoverByronWallet/#10-migrate-your-funds","title":"10. Migrate your funds","text":"Now you are ready to migrate your wallet. replace the <old wallet id>
and <new wallet id>
with the values you got above
curl -X POST -H \"Content-Type: application/json\" -d '{\"passphrase\": \"areallylongpassword\"}' http://localhost:3002/v2/byron-wallets/<old wallet id>/migrations/<new wallet id>\n
"},{"location":"Appendix/RecoverByronWallet/#11-congratulations-your-funds-are-now-in-your-new-wallet","title":"11. Congratulations. your funds are now in your new wallet.","text":"From here we recommend you send them to a new address entirely owned and created by jcli or whatever method you have been using for the testnet process.
This technically may not be required. But a lot of us did it and we know it works for setting up pools and stuff.
send a small amount first just to make sure you are in control of the transaction and don't send your funds to la la land.
If you want to send to another address use the command below, but replace the address that you want to send it to, the amount, and your <new wallet id>
curl -X POST -H \"Content-Type: application/json\" -d '{\"payments\": [ { \"address\": \"<address to send to>\"\", \"amount\": { \"quantity\": 83333330000000, \"unit\": \"lovelace\" } } ], \"passphrase\": \"areallylongpasswordagain\"}' http://localhost:3002/v2/wallets/<new wallet id>/transactions\n
"},{"location":"Appendix/monitoring/","title":"Monitoring","text":"Ensure the Pre-Requisites are in place before you proceed.
This is an easy-to-use script to automate setting up of monitoring tools. Tasks automates the following tasks: - Installs Prometheus, Node Exporter and Grafana Servers for your respective Linux architecture. - Configure Prometheus to connect to cardano node and node exporter jobs. - Provisions the installed prometheus server to be automatically available as data source in Grafana. - Provisions two of the common grafana dashboards used to monitor cardano-node
by SkyLight and IOHK to be readily consumed from Grafana. - Deploy prometheus
,node_exporter
and grafana-server
as systemd service on Linux. - Start and enable those services.
Note that securing prometheus/grafana servers via TLS encryption and other security best practices are out of scope for this document, and its mainly aimed to help you get started with monitoring without much fuss.
!> Ensure that you've opened the firewall port for grafana server (default used in this script is 5000)
"},{"location":"Appendix/monitoring/#download-setup_monsh","title":"Download setup_mon.sh","text":"If you have run guild-deploy.sh
, you can skip this step. To download monitoring script, you can execute the commands below:
cd $CNODE_HOME/scripts\nwget https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/setup_mon.sh\nchmod 750 setup_mon.sh\n
"},{"location":"Appendix/monitoring/#customise-any-environment-variables","title":"Customise any Environment Variables","text":"The default selection may not always be usable for everyone. You can customise further environment variable settings by opening in editor (eg: vi setup_mon.sh
), and updating variables below to your liking:
#!/usr/bin/env bash\n# shellcheck disable=SC2209,SC2164\n\n######################################################################\n#### Environment Variables\n######################################################################\nCNODE_IP=127.0.0.1\nCNODE_PORT=12798\nGRAFANA_HOST=0.0.0.0\nGRAFANA_PORT=5000\nPROJ_PATH=/opt/cardano/monitoring\nPROM_HOST=127.0.0.1\nPROM_PORT=9090\nNEXP_PORT=$(( PROM_PORT + 1 ))\n````\n\n#### Set up Monitoring\n\nExecute setup_mon.sh with full path to destination folder you want to setup monitoring in. If you're following guild folder structure, you do not need to specify `-d`. Read the usage comments below before you run the actual script.\n\nNote that to deploy services as systemd, the script expect sudo access is available to the user running the script.\n\n``` bash\ncd $CNODE_HOME/scripts\n# To check Usage parameters:\n# ./setup_mon.sh -h\n#Usage: setup_mon.sh [-d directory] [-h hostname] [-p port]\n#Setup monitoring using Prometheus and Grafana for Cardano Node\n#-d directory Directory where you'd like to deploy the packages for prometheus , node exporter and grafana\n#-i IP/hostname IPv4 address or a FQDN/DNS name where your cardano-node (relay) is running (check for hasPrometheus in config.json; eg: 127.0.0.1 if same machine as cardano-node)\n#-p port Port at which your cardano-node is exporting stats (check for hasPrometheus in config.json; eg: 12798)\n./setup_mon.sh\n# \n# Downloading prometheus v2.18.1...\n# Downloading grafana v7.0.0...\n# Downloading exporter v0.18.1...\n# Downloading grafana dashboard(s)...\n# - SKYLight Monitoring Dashboard\n# - IOHK Monitoring Dashboard\n# \n# NOTE: Could not create directory as rdlrt, attempting sudo ..\n# NOTE: No worries, sudo worked !! Moving on ..\n# Configuring components\n# Registering Prometheus as datasource in Grafana..\n# Creating service files as root..\n# \n# =====================================================\n# Installation is completed\n# =====================================================\n# \n# - Prometheus (default): http://127.0.0.1:9090/metrics\n# Node metrics: http://127.0.0.1:12798\n# Node exp metrics: http://127.0.0.1:9091\n# - Grafana (default): http://0.0.0.0:5000\n# \n# \n# You need to do the following to configure grafana:\n# 0. The services should already be started, verify if you can login to grafana, and prometheus. If using 127.0.0.1 as IP, you can check via curl\n# 1. Login to grafana as admin/admin (http://0.0.0.0:5000)\n# 2. Add \"prometheus\" (all lowercase) datasource (http://127.0.0.1:9090)\n# 3. Create a new dashboard by importing dashboards (left plus sign).\n# - Sometimes, the individual panel's \"prometheus\" datasource needs to be refreshed.\n# \n# Enjoy...\n# \n# Cleaning up...\n
"},{"location":"Appendix/monitoring/#view-dashboards","title":"View Dashboards","text":"You should now be able to Login to grafana dashboard, using the public IP of your server, at port 5000. The initial credentials to login would be admin/admin, and you will be asked to update your password upon first login. Once logged on, you should be able to go to Manage > Dashboards
and select the dashboard you'd like to view. Note that if you've just started the server, you might see graphs as empty, as initial interval for dashboards is 12 hours. You can change it to 5 minutes by looking at top right section of the page.
Thanks to Pal Dorogi for the original setup instructions used for modifying.
"},{"location":"Appendix/postgres/","title":"Sample Postgres Setup","text":"These deployment instructions used for reference while building cardano-db-sync tool, with the scope being ease of set up, and some tuning baselines for those who are new to Postgres DB. It is recommended to customise these as per your needs for Production builds.
Important
You'd find it pretty useful to set up ZFS on your system prior to setting up Postgres, to help with your IOPs throughput requirements. You can find sample install instructions here. You can set up your entire root mount to be on ZFS, or you can opt to mount a file as ZFS on \"${CNODE_HOME}\"
"},{"location":"Appendix/postgres/#install-postgresql-server","title":"Install PostgreSQL Server","text":"Execute commands below to set up Postgres Server
# Determine OS platform\nOS_ID=$( (grep -i ^ID_LIKE= /etc/os-release || grep -i ^ID= /etc/os-release) | cut -d= -f 2)\nDISTRO=$(grep -i ^NAME= /etc/os-release | cut -d= -f 2)\n\nif [ -z \"${OS_ID##*debian*}\" ]; then\n#Debian/Ubuntu\nwget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -\n RELEASE=$(lsb_release -cs)\necho \"deb [arch=amd64] http://apt.postgresql.org/pub/repos/apt/ ${RELEASE}\"-pgdg main | sudo tee /etc/apt/sources.list.d/pgdg.list\n sudo apt-get update\n sudo apt-get -y install postgresql-15 postgresql-server-dev-15 postgresql-contrib libghc-hdbc-postgresql-dev\n sudo systemctl restart postgresql\n sudo systemctl enable postgresql\nelse\necho \"We have no automated procedures for this ${DISTRO} system\"\nfi\n
"},{"location":"Appendix/postgres/#create-user-in-postgres","title":"Create User in Postgres","text":"Login to Postgres instance as superuser:
echo $(whoami)\n# <user>\nsudo su postgres\npsql\n
Note the returned as the output of echo $(whoami)
command. Replace all instance of in the documentation below. Execute the below in psql prompt. Replace and PasswordYouWant with your OS user (output of echo $(whoami)
command executed above) and a password you'd like to authenticate to Postgres with:
CREATE ROLE <user> SUPERUSER LOGIN;\nALTER USER <user> PASSWORD 'PasswordYouWant';\n\\q\n
Type exit
at shell to return to your user from postgres"},{"location":"Appendix/postgres/#verify-login-to-postgres-instance","title":"Verify Login to postgres instance","text":"export PGPASSFILE=$CNODE_HOME/priv/.pgpass\necho \"/var/run/postgresql:5432:cexplorer:*:*\" > $PGPASSFILE\nchmod 0600 $PGPASSFILE\npsql postgres\n# psql (15.0)\n# Type \"help\" for help.\n# \n# postgres=#\n
"},{"location":"Appendix/postgres/#tuning-your-instance","title":"Tuning your instance","text":"Before you start populating your DB instance using dbsync data, now might be a good time to put some thought on to baseline configuration of your postgres instance by editing /etc/postgresql/15/main/postgresql.conf
. Typically, you might find a lot of common standard practices parameters available in tuning guides. For our consideration, it would be nice to start with some baselines - for which we will use inputs from example here, which would need to be customised further to your environment and resources.
In a typical Koios [gRest] setup, we use below for minimum viable specs (i.e. 64GB RAM, > 8 CPUs, >16K IOPs for ioping -q -S512M -L -c 10 -s8k .
output when postgres data directory is on ZFS configured with max arc of 4GB), we find the below configuration to be the best common setup:
In addition to above, due to the nature of usage by dbsync (synching from node and restart traversing back to last saved ledger-state snapshot), we leverage data retention on blockchain - as we're not affected by loss of volatile information upon a restart of instance. Thus, we can relax some of the data retention and protection against corruption related settings, as those are IOPs/CPU Load Average impacts that the instance does not need to spend. We'd recommend setting 3 of those below in your /etc/postgresql/15/main/postgresql.conf
:
Once your changes are done, ensure to restart postgres service using sudo systemctl restart postgresql
.
Important
An average pool operator may not require cardano-db-sync at all. Please verify if it is required for your use as mentioned here.
PGPASSFILE
environment variable is set as per the instructions in the sample guide, for db-sync
to be able to connect.Execute the below to clone the cardano-db-sync
repository to $HOME/git
folder on your system:
cd ~/git\ngit clone https://github.com/input-output-hk/cardano-db-sync\ncd cardano-db-sync\n
"},{"location":"Build/dbsync/#build-cardano-db-sync","title":"Build Cardano DB Sync","text":"You can use the instructions below to build the latest release of cardano-db-sync
.
git fetch --tags --all\ngit pull\n# Include the cardano-crypto-praos and libsodium components for db-sync\n# On CentOS 7 (GCC 4.8.5) we should also do\n# echo -e \"package cryptonite\\n flags: -use_target_attributes\" >> cabal.project.local\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-db-sync/releases/latest | jq -r .tag_name)\n# Use `-l` argument if you'd like to use system libsodium instead of IOG fork of libsodium while compiling\n$CNODE_HOME/scripts/cabal-build-all.sh\n
The above would copy the cardano-db-sync
binary into ~/.local/bin
folder."},{"location":"Build/dbsync/#prepare-db-for-sync","title":"Prepare DB for sync","text":"Now that binaries are available, let's create our database (when going through breaking changes, you may need to use --recreatedb
instead of --createdb
used for the first time. Again, we expect that PGPASSFILE
environment variable is already set (refer to the top of this guide for sample instructions):
cd ~/git/cardano-db-sync\n# scripts/postgresql-setup.sh --dropdb #if exists already, will fail if it doesnt - thats OK\nscripts/postgresql-setup.sh --createdb\n# Password:\n# Password:\n# All good!\n
Verify you can see \"All good!\" as above!
"},{"location":"Build/dbsync/#create-symlink-to-schema-folder","title":"Create Symlink to schema folder","text":"DBSync instance requires the schema files from the git repository to be present and available to the dbsync instance. You can either clone the ~/git/cardano-db-sync/schema
folder OR create a symlink to the folder and make it available to the startup command we will be using. We will use the latter in sample below:
ln -s ~/git/cardano-db-sync/schema $CNODE_HOME/guild-db/schema\n
"},{"location":"Build/dbsync/#restore-using-snapshot","title":"Restore using Snapshot","text":"If you're running a mainnet/preview/preprod instance of dbsync, you might want to consider use of dbsync snapshots as documented here. The snapshot files as of recent epoch are available via links in release notes.
At high-level, this would involve steps as below (read and update paths as per your environment):
# Replace the actual link below with the latest one from release notes\nwget https://update-cardano-mainnet.iohk.io/cardano-db-sync/13/db-sync-snapshot-schema-13-block-7622755-x86_64.tgz\nrm -rf ${CNODE_HOME}/guild-db/ledger-state ; mkdir -p ${CNODE_HOME}/guild-db/ledger-state\ncd -; cd ~/git/cardano-db-sync\nscripts/postgresql-setup.sh --restore-snapshot /tmp/dbsyncsnap.tgz ${CNODE_HOME}/guild-db/ledger-state\n# The restore may take a while, please be patient and do not interrupt the restore process. Once restore is successful, you may delete the downloaded snapshot as below:\n# rm -f /tmp/dbsyncsnap.tgz\n
"},{"location":"Build/dbsync/#test-running-dbsync-manually-at-terminal","title":"Test running dbsync manually at terminal","text":"In order to verify that you can run dbsync, before making a start - you'd want to ensure that you can run it interactively once. To do so, try the commands below:
cd $CNODE_HOME/scripts\nexport PGPASSFILE=$CNODE_HOME/priv/.pgpass\n./dbsync.sh\n
You can monitor logs if needed via parallel session using tail -10f $CNODE_HOME/logs/dbsync.json
. If there are no error, you would want to press Ctrl-C to stop the dbsync.sh execution and deploy it as a systemd service. To do so, use the commands below (the creation of file is done using sudo
permissions, but you can always deploy it manually):
cd $CNODE_HOME/scripts\n./dbsync.sh -d\n# Deploying cnode-dbsync.service as systemd service..\n# cnode-dbsync.service deployed successfully!!\n
Now to start dbsync instance, you can run sudo systemctl start cnode-dbsync
Note
Note that dbsync while syncs, it might defer creation of indexes/constraints to speed up initial catch up. Once relatively closer to tip, this will initiate creation of indexes - which can take a while in background. Thus, you might notice the query timings right after reaching to tip might not be as good.
"},{"location":"Build/dbsync/#update-dbsync","title":"Update DBSync","text":"Updating dbsync can have different tasks depending on the versions involved. We attempt to briefly explain the tasks involved:
sudo systemctl stop cnode-dbsync
)Go to your git folder, pull and checkout to latest version as in example below (if you were to switch to 13.1.1.3
):
cd ~/git/cardano-db-sync\ngit pull\ngit checkout 13.1.1.3\n
If going through major version update (eg: 13.x.x.x to 14.x.x.x), you might need to rebuild and resync db from scratch, you may still follow the section to restore using snapshot to save some time (as long as you use a compatible snapshot).
cardano-node
version has changed (specifically if it's ledger-state
schema is different), you'd also need to clear the ledger-state directory (eg: rm -rf $CNODE_HOME/guild-db/ledger-state
)dbsync.sh
starts up fine manually as described above. If it does, stop it and go ahead with startup of systemd service (i.e. sudo systemctl start cnode-dbsync
)To validate, connect to your postgres
instance and execute commands as per below:
export PGPASSFILE=$CNODE_HOME/priv/.pgpass\npsql cexplorer\n
You should be at the psql
prompt, you can check the tables and verify they're populated:
\\dt\nselect * from meta;\n
A sample output of the above two commands may look like below (the number of tables and names may vary between versions):
cexplorer=# \\dt\nList of relations\n Schema | Name | Type | Owner\n--------+---------------------------+-------+-------\n public | ada_pots | table | centos\n public | admin_user | table | centos\n public | block | table | centos\n public | delegation | table | centos\n public | delisted_pool | table | centos\n public | epoch | table | centos\n public | epoch_param | table | centos\n public | epoch_stake | table | centos\n public | ma_tx_mint | table | centos\n public | ma_tx_out | table | centos\n public | meta | table | centos\n public | orphaned_reward | table | centos\n public | param_proposal | table | centos\n public | pool_hash | table | centos\n public | pool_meta_data | table | centos\n public | pool_metadata | table | centos\n public | pool_metadata_fetch_error | table | centos\n public | pool_metadata_ref | table | centos\n public | pool_owner | table | centos\n public | pool_relay | table | centos\n public | pool_retire | table | centos\n public | pool_update | table | centos\n public | pot_transfer | table | centos\n public | reserve | table | centos\n public | reserved_ticker | table | centos\n public | reward | table | centos\n public | schema_version | table | centos\n public | slot_leader | table | centos\n public | stake_address | table | centos\n public | stake_deregistration | table | centos\n public | stake_registration | table | centos\n public | treasury | table | centos\n public | tx | table | centos\n public | tx_in | table | centos\n public | tx_metadata | table | centos\n public | tx_out | table | centos\n public | withdrawal | table | centos\n(37 rows)\n\n\n\nselect * from meta;\n id | start_time | network_name\n----+---------------------+--------------\n 1 | 2017-09-23 21:44:51 | mainnet\n(1 row)\n
"},{"location":"Build/graphql/","title":"Graphql","text":"!> We have stopped maintaining documentation for Cardano-GraphQL, and prefer use of PostgREST instead. The specific component does not follow the process/technology/language (requires npm, yarn) used by other components (cabal/stack), and the value provided by cardano-graphql
over the (haskell-based) hasura instance has been negligible. Also, an average pool operator may not require cardano-graphql at all; please verify whether it is required for your use as mentioned here. The instructions below are out of date
.
Ensure the Pre-Requisites are in place before you proceed.
"},{"location":"Build/graphql/#build-hasura-graphql-engine","title":"Build Hasura graphql-engine","text":"Going with the spirit of the documentation here, instruction to build the graphql-engine binary :)
cd ~/git\ngit clone https://github.com/hasura/graphql-engine\ncd graphql-engine/server\n$CNODE_HOME/scripts/cabal-build-all.sh\n
This should make graphql-engine
available at ~/.local/bin."},{"location":"Build/graphql/#build-cardano-graphql","title":"Build cardano-graphql","text":"The build will fail if you are running a version of node.js earlier than 10.0.0 (which could happen if you have a conflicting version in your $PATH). You can verify your node version by executing the below:
#check your version of node.js\nnode -v\n#if response is 10.0.0 or higher build can proceed. \n
The commands below will help you compile the cardano-graphql node:
cd ~/git\ngit clone https://github.com/input-output-hk/cardano-graphql\ncd cardano-graphql\ngit checkout v1.1.1\nyarn\n#yarn install v1.22.4\n# [1/4] Resolving packages...\n# [2/4] Fetching packages...\n# info fsevents@2.1.2: The platform \"linux\" is incompatible with this module.\n# info \"fsevents@2.1.2\" is an optional dependency and failed compatibility check. Excluding it from installation.\n# info fsevents@1.2.12: The platform \"linux\" is incompatible with this module.\n# info \"fsevents@1.2.12\" is an optional dependency and failed compatibility check. Excluding it from installation.\n# [3/4] Linking dependencies...\n# warning \" > graphql-type-datetime@0.2.4\" has incorrect peer dependency \"graphql@^0.13.2\".\n# warning \" > @typescript-eslint/eslint-plugin@1.13.0\" has incorrect peer dependency \"eslint@^5.0.0\".\n# warning \" > @typescript-eslint/parser@1.13.0\" has incorrect peer dependency \"eslint@^5.0.0\".\n# [4/4] Building fresh packages...\n# Done in 20.70s.\nyarn build\n# yarn run v1.22.4\n# $ yarn codegen:internal && yarn codegen:external && tsc -p . && shx cp src/schema.graphql dist/\n# $ graphql-codegen\n# \u2714 Parse configuration\n# \u2714 Generate outputs\n# $ graphql-codegen --config ./codegen.external.yml\n# \u2714 Parse configuration\n# \u2714 Generate outputs\n# Done in 38.11s.\ncd dist\nrsync -arvh ../node_modules ./\n
"},{"location":"Build/graphql/#set-up-environment-for-cardano-graphql","title":"Set up environment for cardano-graphql","text":"cardano-graphql requires cardano-node, cardano-db-sync-extended, postgresql and graphql-engine to be set up and running. The below will help you map the components:
export PGPASSFILE=$CNODE_HOME/priv/.pgpass\nIFS=':' read -r -a PGPASS <<< $(cat $PGPASSFILE)\nexport HASURA_GRAPHQL_ENABLE_TELEMETRY=false # Optional. To send usage data to Hasura, set to true.\nexport HASURA_GRAPHQL_DATABASE_URL=postgres://${PGPASS[3]}:${PGPASS[4]}@${PGPASS[0]}:${PGPASS[1]}/${PGPASS[2]}\nexport HASURA_GRAPHQL_ENABLE_CONSOLE=true\nexport HASURA_GRAPHQL_ENABLED_LOG_TYPES=\"startup, http-log, webhook-log, websocket-log, query-log\"\nexport HASURA_GRAPHQL_SERVER_PORT=4080\nexport HASURA_GRAPHQL_SERVER_HOST=0.0.0.0\nexport CACHE_ENABLED=true\nexport HASURA_URI=http://127.0.0.1:4080\ncd ~/git/cardano-graphql/dist\ngraphql-engine serve &\nnode index.js\n
"},{"location":"Build/grest-changelog/","title":"Koios gRest Changelog","text":""},{"location":"Build/grest-changelog/#1010-for-all-networks","title":"[1.0.10] - For all networks.","text":"The release is effectively same as 1.0.10rc
except with one minor modification below.
cs.[{\"key\":\"value\"}]
in PostgREST #172This release primarily focuses on the ability to better support DeFi projects, along with some value addition for existing clients, by bringing in 10 new endpoints (paired with 2 deprecations), a few additional optional input parameters, and some additional output columns to existing endpoints. The only breaking change/fix is for the output returned for tx_info.
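As a hedged illustration of the containment filter fix noted above (the endpoint and column names here are hypothetical - only the cs. syntax is the point), such a filter is passed via the query string:
curl -s -G 'http://localhost:8053/some_endpoint' --data-urlencode 'json_column=cs.[{\"key\":\"value\"}]'\n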
Also, dbsync 13.1.x.x has been released and is recommended for use with this release.
"},{"location":"Build/grest-changelog/#new-endpoints-added","title":"New endpoints added","text":"/asset_addresses
- Equivalent of deprecated /asset_address_list
#149/asset_nft_address
- Returns the address where the specified NFT resides #149/account_utxos
- Returns brief details on non-empty UTxOs associated with a given stake address #149/asset_info_bulk
- Bulk version of /asset_info
#142/asset_token_registry
- Returns assets registered via token registry on github #145/credential_utxos
- Returns UTxOs associated with a payment credential #149/param_updates
- Returns list of parameter update proposals applied to the network #149/policy_asset_addresses
- Returns addresses with quantity for each asset on a given policy #149/policy_asset_info
- Equivalent of deprecated /asset_policy_info
but with more details in output #149/policy_asset_list
- Returns list of assets under the given policy (including supply) #142, #149/account_addresses
- Add optional _first_only
and _empty
flags to show only the first address with a tx, or to include empty addresses in the output #149/epoch_info
- Add optional _include_next_epoch
field to show next epoch stats if available (eg: nonce, active stake) #143/account_assets
, /address_assets
, /address_info
, /tx_info
, /tx_utxos
- Add decimals
to output #142/policy_asset_info
- Add minting_tx_hash
, total_supply
, mint_cnt
, burn_cnt
and creation_time
fields to the output #149/tx_info
- Change _invalid_before
and _invalid_after
to text fields #141tx_info
- Remove the field plutus_contracts
> [array] > outputs
as there is no logic to connect it to inputs spending #163/asset_address_list
- Renamed to asset_addresses
keeping naming in line with other endpoints (old one still present, but will be deprecated in a future release) #149/asset_policy_info
- Renamed to policy_asset_info
keeping naming in line with other endpoints (old one still present, but will be deprecated in a future release) #149/epoch_info
, /epoch_params
- Restrict output to current epoch #149/block_info
- Use /previous_id
field to show previous/next blocks (previously was using block_id/height) #145/asset_info
/asset_policy_info
- Fix mint tx data to be latest #141grest.asset_info_cache
to hold mint/burn counts along with first/last mint tx/keys #142/pool_delegators
output column latest_delegation_tx_hash
#149authenticator
user, whose default statement_timeout
is set to 65s, and update configs accordingly #1606This release is effectively the same as 1.0.9rc below (please check out the notes accordingly), just with a minor bug fix on setup-grest.sh itself.
This release candidate is non-breaking for existing methods and inputs, but breaking for the output objects of endpoints. The aim with the release candidate version is to allow folks a couple of weeks to test and adapt their libraries before applying to mainnet.
"},{"location":"Build/grest-changelog/#new-endpoints-added_1","title":"New endpoints added","text":"datum_info
- List of datum information for given datum hashesaccount_info_cached
- Same as account_info
, but serves cached information instead of live oneaddress_info
, address_assets
, account_assets
, tx_info
, asset_list
, asset_summary -
to align output asset_list
object to return array of policy_id
, asset_name
, fingerprint
(and quantity
, minting_txs
where applicable) #120asset_history
- Fix metadata to wrap in an array to refer to the right object #122asset_txs
- Add optional boolean parameter _history
(default: false
) to toggle between querying current UTxO set vs entire history for asset #122pool_history
- fixed_cost
, pool_fees
, deleg_rewards
, epoch_ros
will be returned as 0 when null #122tx_info
- plutus_contracts->outputs
can be null #122guild-operators
repository to koios-artifacts
repository. This is to ensure that the updates made to scripts and other tooling do not have a dependency on Koios query versioning #122block_info
- Use block_no
instead of id
to check for previous/next block hash #122This release contains minor bug fixes that were discovered in koios-1.0.7. No major changes to output for this one.
"},{"location":"Build/grest-changelog/#changes-for-api","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#new-endpoints-added_2","title":"New endpoints added","text":"tx_info
and tx_metadata
- Align metadata for JSON output format #1542blocks
- Query Output aligned to specs (epoch
=> epoch_no
)epoch_block_protocols
- [ ** Specs only ** ] Fix documentation schema, which was accidentally showing wrong outputpool_delegators_history
- List all epochs instead of current, if no _epoch_no
is specified #1545asset_info
- Fix metadata aggregation for minting transactions with multiple metadata keys #1543stake_distribution_new_accounts
- Leftover reference for account_info
which now accepts an array, resulted in an error when populating the stake distribution cache for new accounts #1541grest-poll.sh
- Remove query view section from polling script, and remove grestrpcs re-creation per hour (it's already updated when setup-grest.sh
is run), in preparation for #1545This release continues updates from koios-1.0.6 to further utilise stake-snapshot cache tables, which would be useful for SPOs as well as reduce downtime post epoch transition. One widely requested feature, accepting bulk inputs for many block/address/account endpoints, is now complete. Additionally, koios instance providers are now recommended to use cardano-node 1.35.3 with dbsync 13.0.5.
"},{"location":"Build/grest-changelog/#changes-for-api_1","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#new-endpoints-added_3","title":"New endpoints added","text":"pool_delegators_history
- Provides historical record for pool's delegators #1486pool_stake_snapshot
- Provides mark, set and go snapshot values for pool being queried. #1489pool_delegators
- No longer accepts _epoch_no
as parameter, as it only returns live delegators. Additionally provides latest_delegation_hash
in output. #1486tx_info
- epoch
=> epoch_no
#1494tx_info
- Change collateral_outputs
(array) to collateral_output
(object) as collateral output is only singular in current implementation #1496address_info
- Add inline_datum
and reference_script
to output #1500pool_info
- Add sigma
field to output #1511pool_updates
- Add historical metadata information to output #1503_stake_address text
becomes _stake_addresses text[]
). The additional changes in output are as below (a bulk query sketch follows at the end of this changelog entry):block_txs
- Now returns block_hash
and array of tx_hashes
address_info
- Additional field address
returned in outputaddress_assets
- Now returns address
and an array of assets
JSONaccount_addresses
- Accepts stake_addresses
array and outputs stake_address
and array of addresses
account_assets
- Accepts stake_addresses
array and outputs stake_address
and array of assets
JSONaccount_history
- Accepts stake_addresses
array along with epoch_no
integer and outputs stake_address
and array of history
JSONaccount_info
- Accepts stake_addresses
array and returns additional field stake_address
to outputaccount_rewards
- Now returns stake_address
and an array of rewards
JSONaccount_updates
- Now returns stake_address
and an array of updates
JSONasset_info
- Change minting_tx_metadata
from array to object #1533account_addresses
- Sort results by oldest address first #1538epoch_info_cache
- Only update last_tx_id of previous epoch on epoch transition #1490 and #1502epoch_info_cache
/ stake_snapshot_cache
- Store total snapshot stake to epoch stake cache, and active pool stake to stake snapshot cache #1485The backlog of items not being added to mainnet has been increasing due to delays with Vasil HFC event to Mainnet. As such we had to come up with a split update approach. The mainnet nodes are still not qualified to be Vasil-ready (in our opinion) for 1.35.x , but dbsync 13 can be used against node 1.34.1 fine. In order to cater for this split, we have added an intermediate koios-1.0.6m tag that brings dbsync updates while maintaining node-1.34.1.
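Since many endpoints now accept arrays, a bulk query is a POST with a JSON body. A minimal sketch against a local gRest instance (placeholder stake addresses; port 8053 as per the setup section later in this document):
curl -s -X POST -H \"Content-Type: application/json\" -d '{\"_stake_addresses\":[\"stake1u9xyz...\",\"stake1u8abc...\"]}' http://localhost:8053/rpc/account_info\n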
"},{"location":"Build/grest-changelog/#changes-for-api_2","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes","title":"Data Output Changes","text":"pool_delegators
- epoch_no
=> active_epoch_no
#1454asset_history
- Add block_time
and metadata
fields for all previous mint transactions #1468asset_info
- Retain latest mint transaction instead of first (in line with most CIPs as well as pool metadata - latest valid meta being live) #1468/tip
, /blocks
, /block_info
=> block_time
/genesis
=> systemStart
/epoch_info
=> start_time
, first_block_time
, last_block_time
, end_time
/tx_info
=> tx_timestamp
/asset_info
=> creation_time
tx_info
- Add Vasil data #1464collaterals
=> collateral_inputs
collateral_outputs
, reference_inputs
to tx_info
datum_hash
, inline_datum
, reference_script
to collateral input/outputs, reference inputs & inputs/outputs JSON.cost_model
instead of cost_model_id
referenceepoch_params
- Update leftover lovelace references to text for consistency: #1484key_deposit
pool_deposit
min_utxo_value
min_pool_cost
coins_per_utxo_size
get-metrics.sh
- Add active/idle connections to database #1459grest-poll.sh
: Bump haproxy to 2.6.1 and set default value of API_STRUCT_DEFINITION to be dependent on network used. #1450grest.account_active_stake_cache
- optimise code and delete historical view (beyond 4 epochs). [#1451(https://github.com/cardano-community/guild-operators/pull/1451)tx_metalabels
- Move metalabels from view to RPC using lose indexscan (much better performance) #1474grest.stake_snapshot_cache
- Fix rewards for new accounts #1476Since there have been a few deviations wrt Vasil for testnet and mainnet, this version only targets networks except Mainnet!
"},{"location":"Build/grest-changelog/#changes-for-api_3","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes_1","title":"Data Output Changes","text":"/epoch_info
- Add total_rewards
and avg_block_reward
for a given epoch #43/tip
, /blocks
, /block_info
=> block_time
/genesis
=> systemStart
/epoch_info
=> start_time
, first_block_time
, last_block_time
, end_time
/tx_info
=> tx_timestamp
/asset_info
=> creation_time
/blocks
, /block_info
=> Add proto_major
and proto_minor
for a given block to output #55asset_registry_update.sh
script to rely on commit hash instead of POSIX timestamps, and performance bump. #1428epoch_no
, block_no
to /address_txs
, /credential_txs
and /asset_txs
endpoints. #1409/asset_txs
, returning transactions as an array - allows for leveraging native PostgREST filtering. #1409/pool_info
. #1414setup-grest.sh
with -r
(reset flag), as the delta registry records to insert depends on file (POSIX) timestamps. #1410grest-poll.sh
. Important
gRest is an open source implementation of a query layer built over dbsync using PostgREST and HAProxy
. The package is built as part of Koios team's efforts to unite community individual stream of work together and give back a more aligned structure to query dbsync and adopt standardisation to queries utilising open-source tooling as well as collaboration. In addition to these, there are also accessibility features to deploy rules for failover, do healthchecks, set up priorities, have ability to prevent DDoS attacks, provide timeouts, report tips for analysis over a longer period, etc - which can prove to be really useful when performing any analysis for instances.
Note
Note that the scripts below do allow for provisioning ogmios integration too, but Ogmios - currently - is not designed to provide advanced session management for a server-client architecture in the absence of a middleware. Thus, availability of ogmios from the monitoring instance is restricted, to avoid the ability to DDoS an instance.
"},{"location":"Build/grest/#components","title":"Components","text":"PostgREST: An RPC JSON interface for any PostgreSQL database (in our case, database served via cardano-db-sync
) to provide a RESTful Web Service. The endpoints of PostgREST in itself are essentially the table/functions defined in elected schema via grest config file. You can read more about advanced query syntax using PostgREST API here, but we will provide a simpler view using examples towards the end of the page. It is an easy alternative - with almost no overhead as it directly serves the underlying database as an API, as compared to Cardano GraphQL
component (which may often have lags). Some of the other advantages of PostgREST over graphql-based projects are performance, statelessness, minimal overhead, and support for JWT / native Postgres DB authentication against the REST interface.
HAProxy: An easy gateway proxy that automatically provides failover/basic DDoS protection, rules management for load balancing, set-up of multiple frontends/backends, easy means to have TLS enabled for public-facing instances, etc. You may alter the settings for the proxy layer as per your SecOps preferences. This component is optional (eg: if you prefer to expose your PostgREST server itself, you can do so using similar steps below).
To start with, you'd want to ensure your current shell session has access to Postgres credentials, continuing from the examples in the above-mentioned Sample Postgres deployment guide.
cd $CNODE_HOME/priv\nexport PGPASSFILE=$CNODE_HOME/priv/.pgpass\npsql cexplorer\n
Ensure that you can connect to your Postgres DB fine using the above (quit from psql once validated using \q). As part of guild-deploy.sh execution, you'd find the setup-grest.sh file made available in the ${CNODE_HOME}/scripts folder, which will help you automate the installation of PostgREST and HAProxy, as well as bring in the latest queries/functions provided via Koios to your instances.
Warning
As of now, gRest services are in alpha stage - while they can be utilised, please remember there may be breaking changes, and every collaborator is expected to work with the team to keep their instances up-to-date using the alpha branch.
Familiarise with the usage options for the setup script , the syntax can be viewed as below:
cd \"${CNODE_HOME}\"/scripts\n./setup-grest.sh -h\n#\n# Usage: setup-grest.sh [-f] [-i [p][r][m][c][d]] [-u] [-b <branch>]\n# \n# Install and setup haproxy, PostgREST, polling services and create systemd services for haproxy, postgREST and dbsync\n# \n# -f Force overwrite of all files including normally saved user config sections\n# -i Set-up Components individually. If this option is not specified, components will only be installed if found missing (eg: -i prcd)\n# p Install/Update PostgREST binaries by downloading latest release from github.\n# r (Re-)Install Reverse Proxy Monitoring Layer (haproxy) binaries and config\n# m Install/Update Monitoring agent scripts\n# c Overwrite haproxy, postgREST configs\n# d Overwrite systemd definitions\n# -u Skip update check for setup script itself\n# -q Run all DB Queries to update on postgres (includes creating grest schema, and re-creating views/genesis table/functions/triggers and setting up cron jobs)\n# -b Use alternate branch of scripts to download - only recommended for testing/development (Default: master)\n#\n
To run the setup overwriting all standard deployment tasks from a branch (eg: koios-1.0.9
branch), you may want to use:
./setup-grest.sh -f -i prmcd -r -q -b koios-1.0.9\n
Similarly - if you'd like to re-install all components and force overwrite all configs but not reset cache tables, you may run:
./setup-grest.sh -f -i prmcd -q\n
Another example could be to preserve your config, but only update queries using an alternate branch (eg: let's say you want to try the branch alpha
prior to a tagged release). To do so, you may run:
./setup-grest.sh -q -b alpha\n
Please ensure to follow the on-screen instructions, if any (for example restarting deployed services, or updating configs to specify correct target postgres URLs/enable TLS/add peers etc in ${CNODE_HOME}/priv/grest.conf
and ${CNODE_HOME}/files/haproxy.cfg
).
The default ports used will make the haproxy instance available at port 8053, or 8453 if TLS is enabled (you might want to enable a firewall rule to open this port to services you would like to access). If you want to prevent unauthenticated access to the grest schema, uncomment the jwt-secret and specify a custom secret-token
.
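As a sketch of that config change (the secret value below is a placeholder - PostgREST recommends at least 32 characters for an HS256 secret), the relevant line in ${CNODE_HOME}/priv/grest.conf would look like:
# ${CNODE_HOME}/priv/grest.conf - illustrative excerpt only\njwt-secret = \"a-custom-secret-token-of-at-least-32-characters\"\n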
Reminder
Once you've successfully deployed the grest instance, it will deploy certain cron jobs that will ensure the relevant cache tables are updated periodically. Until these have finished (especially on the first run, which could take an hour or so on mainnet), your instance will likely not pass any tests from grest-poll.sh, but that's expected.
In order to enable SSL on your haproxy, all you need to do is edit the file ${CNODE_HOME}/files/haproxy.cfg
and update the frontend app section to uncomment ssl bind (and comment normal bind).
Info
If you're not familiar with how to configure TLS, OR would not like to buy a certificate, you can find tips on how to create a TLS certificate for free via LetsEncrypt using tutorials here. Once you do have a TLS certificate generated, you need to chain the private key and full chain cert together in a file - /etc/ssl/server.pem
- which can be then referenced as below:
frontend app\n #bind 0.0.0.0:8053\n ## If using SSL, comment line above and uncomment line below\n bind :8453 ssl crt /etc/ssl/server.pem no-sslv3\n http-request set-log-level silent\n acl srv_down nbsrv(grest_postgrest) eq 0\n acl is_wss hdr(Upgrade) -i websocket\n ...\n
Restart haproxy service for changes to take effect."},{"location":"Build/grest/#validation","title":"Validation","text":"With the setup, you also have a checkstatus.sh
script, which will query the Postgres DB instance via haproxy (coming through postgREST), and only show an instance as up if the latest block in your DB instance is within 180 seconds.
Important
If you'd like to join the elastic cluster via Koios, please raise a PR by editing the topology files in this folder!!
If you were using guild
network, you could do a couple of very basic sanity checks as per below:
To query active stake for pool pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr
in epoch 122
, we can execute the below:
curl -d _pool_bech32=pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr -d _epoch_no=122 -s http://localhost:8053/rpc/pool_active_stake\n## {\"active_stake_sum\" : 19409732875}\n
To check latest owner key(s) for a given pool pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr
, you can execute the below:
curl -d _pool_bech32=pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr -s http://localhost:8050/rpc/pool_owners\n## [{\"owner\" : \"stake_test1upx5p04dn3t6dvhfh27744su35vvasgaaq565jdxwlxfq5sdjwksw\"}, {\"owner\" : \"stake_test1uqak99cgtrtpean8wqwp7d9taaqkt9gkkxga05m5azcg27chnzfry\"}]\n
You may want to explore what all endpoints come out of the box, and test them out, to do so - refer to API documentation for OpenAPI3 documentation. Each endpoint has a pre-filled example for mainnet and connects by default to primary Koios endpoint, allowing you to test endpoints and if needed - grab the curl
commands to start testing yourself against your local or remote instances.
If you're interested to participate in decentralised infrastructure by providing an instance, there are a few additional steps you'd need:
Enable ports for your HAProxy instance (default: 8053), gRest Exporter service (default: 8059) and (optionally) submit API instance (default: 8090) against the monitoring instance (do not need to open these ports to internet) of corresponding network.
Ensure that each of the service above is listening on your public IP address (for instance, submitapi.sh might need to be edited to change HOSTADDR to 0.0.0.0
and restarted).
Create a PR specifying connectivity information to your HAProxy port here.
Make sure to join the telegram discussions group to participate in any discussions, actions, polls for new-features, etc. Feel free to give a shout in the group in case you have trouble following any of the above
Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
"},{"location":"Build/node-cli/#build-instructions","title":"Build Instructions","text":""},{"location":"Build/node-cli/#clone-the-repository","title":"Clone the repository","text":"Execute the below to clone the cardano-node repository to $HOME/git
folder on your system:
cd ~/git\ngit clone https://github.com/input-output-hk/cardano-node\ncd cardano-node\n
"},{"location":"Build/node-cli/#build-cardano-node","title":"Build Cardano Node","text":"You can use the instructions below to build the latest release of cardano-node.
git fetch --tags --all\ngit pull\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-node/releases/latest | jq -r .tag_name)\n\n# Use `-l` argument if you'd like to use system libsodium instead of IOG fork of libsodium while compiling\n$CNODE_HOME/scripts/cabal-build-all.sh\n
The above would copy the binaries built into ~/.local/bin
folder.
While certain folks might want to build the node themselves (could be due to OS/arch compatibility, trust factor or customisations), for most it might not make sense to build the node locally. Instead, you can download the binaries using cardano-node release notes, where-in you can find the download links for every version. Once downloaded, you would want to make it available to preferred PATH
in your environment (if you're asking how - that'd mean you've skipped skillsets mentioned on homepage).
Execute cardano-cli
and cardano-node
to verify output as below (the exact version and git rev should depend on your checkout tag on github repository):
cardano-cli version\n# cardano-cli 8.1.2 - linux-x86_64 - ghc-8.10\n# git rev <...>\ncardano-node version\n# cardano-node 8.1.2 - linux-x86_64 - ghc-8.10\n# git rev <...>\n
"},{"location":"Build/node-cli/#update-port-number-or-pool-name-for-relative-paths","title":"Update port number or pool name for relative paths","text":"Before you go ahead with starting your node, you may want to update values for CNODE_PORT
in $CNODE_HOME/scripts/env
. Note that it is imperative for operational relays and pools to ensure that the port mentioned is opened via firewall to the destination your node is supposed to connect from. Update your network/firewall configuration accordingly. Future executions of guild-deploy.sh
will preserve and not overwrite these values.
CNODEBIN=\"${HOME}/.local/bin/cardano-node\"\nCCLI=\"${HOME}/.local/bin/cardano-cli\"\nCNODE_PORT=6000\nPOOL_NAME=\"GUILD\"\n
Important
POOL_NAME is the name of folder that you will use when registering pools and starting node in core mode. This folder would typically contain your hot.skey
,vrf.skey
and op.cert
files required. If the mentioned files are absent, the node will automatically start in a passive mode. Note that in case CNODE_PORT is changed, you'd want to re-do the deployment of systemd service as mentioned later in the guide
To test starting the node in interactive mode, you can use the pre-built script below (cnode.sh
) (note that your node logs are being written to $CNODE_HOME/logs
folder, you may not see much output beyond Listening on http://127.0.0.1:12798
). This script automatically determines whether to start the node as a relay or block producer (if the required pool keys are present in the $CNODE_HOME/priv/pool/<POOL_NAME>
as mentioned above). The script contains a user-defined variable CPU_CORES
which determines the number of CPU cores the node will use upon start-up:
######################################\n# User Variables - Change as desired #\n# Common variables set in env file #\n######################################\n\n#CPU_CORES=2 # Number of CPU cores cardano-node process has access to (please don't set higher than physical core count, 2-4 recommended)\n
You can uncomment this and set to the desired number, but be wary not to go above your physical core count. cd \"${CNODE_HOME}\"/scripts\n./cnode.sh\n
Ensure you do not have any errors in the console. To stop the node, hit Ctrl-C - we will start the node as systemd later in the document.
"},{"location":"Build/node-cli/#modify-the-node-to-p2p-mode","title":"Modify the node to P2P mode","text":"Note
The section below only refer to mainnet, as Guildnet/Preview/Preprod templates already come with P2P as default mode, and do not require steps below
In case you prefer to start the node in P2P mode (ideally, only on relays), you can do so by replacing the config.json and topology.json files in $CNODE_HOME/files
folder. You can find a sample of these two files that can be downloaded using commands below:
cd \"${CNODE_HOME}\"/files\nmv config.json config.json.bkp_$(date +%s)\nmv topology.json topology.json.bkp_$(date +%s)\ncurl -sL -f \"https://raw.githubusercontent.com/cardano-community/guild-operators/master/files/config-mainnet.p2p.json\" -o config.json\ncurl -sL -f \"https://raw.githubusercontent.com/cardano-community/guild-operators/alpha/files/topology-mainnet.json\" -o topology.json\n
Once downloaded, you'd want to update config.json (if you want to update any port/path references or change tracers from default) and the topology.json file to include your core/relay nodes in localRoots
section (replacing dummy values currently in place with \"127.0.0.1\"
address. The P2P topology file provides you few public nodes as a fallback to avoid single point of reliance, being IO provided mainnet nodes. You can also remove/update any additional peers as per your preference.
Once updated, since you modified the file manually - there is always a chance of human errors (eg: missing comma/quotes). Thus, we would recommend you to start the node interactively once again before proceeding.
cd \"${CNODE_HOME}\"/scripts\n./cnode.sh\n
Ensure you do not have any errors in the console. To stop the node, hit Ctrl-C - we will start the node as systemd later in the document.
Note
An average pool operator may not require cardano-submit-api
at all. Please verify if it is required for your use as mentioned here. If - however - you do run submit-api for accepting sizeable transaction load, you would want to override the default MEMPOOL_BYTES by uncommenting it in cnode.sh.
cardano-submit-api
is one of the binaries built as part of cardano-node
repository and allows you to submit transactions over a Web API. To run this service interactively, you can use the pre-built script below (submitapi.sh
). Make sure to update submitapi.sh
script to change listen IP or Port that you'd want to make this service available on.
cd $CNODE_HOME/scripts\n./submitapi.sh\n
To stop the process, hit Ctrl-C
"},{"location":"Build/node-cli/#systemd","title":"Run as systemd service","text":"The preferred way to run the node (and submit-api) is through a service manager like systemd. This section explains how to setup a systemd service file.
1. Deploy as a systemd service Execute the below command to deploy your node as a systemd service (from the respective scripts folder):
cd $CNODE_HOME/scripts\n./cnode.sh -d\n# Deploying cnode.service as systemd service..\n# cnode.service deployed successfully!!\n\n./submitapi.sh -d\n# Deploying cnode-submit-api.service as systemd service..\n# cnode-submit-api deployed successfully!!\n
2. Start the service Run below commands to enable automatic start of service on startup and start it.
sudo systemctl start cnode.service\nsudo systemctl start cnode-submit-api.service\n
3. Check status and stop/start commands Replace status
with stop
/start
/restart
depending on what action to take.
sudo systemctl status cnode.service\nsudo systemctl status cnode-submit-api.service\n
Important
In case you see the node exit unsuccessfully upon checking status, please verify you've followed the transition process correctly as documented below, and that you do not have another instance of node already running. It would help to check your system logs (/var/log/syslog
for debian-based and /var/log/messages
for Red Hat/CentOS/Fedora systems, you can also check journalctl -f -u <service>
to examine startup attempt for services) for any errors while starting node.
You can use gLiveView to monitor your node that was started as a systemd service.
cd $CNODE_HOME/scripts\n./gLiveView.sh\n
"},{"location":"Build/offchain-metadata-tools/","title":"Offchain Metadata Tools","text":"Important
In the Cardano multi-asset era, this project helps you create and submit metadata describing your assets, storing them off-chain.
"},{"location":"Build/offchain-metadata-tools/#download-pre-built-binaries","title":"Download pre-built binaries","text":"Go to input-output-hk/offchain-metadata-tools to download the binaries and place in a directory specified by PATH
, e.g. $HOME/.local/bin/
.
An alternative to pre-built binaries - instructions describe how to build the token-metadata-creator
tool but the offchain-metadata-tools repository contains other tools as well. Build the ones needed for your installation.
Execute the below to clone the offchain-metadata-tools repository to $HOME/git folder on your system:
cd ~/git\ngit clone https://github.com/input-output-hk/offchain-metadata-tools.git\ncd offchain-metadata-tools/token-metadata-creator\n
"},{"location":"Build/offchain-metadata-tools/#build-token-metadata-creator","title":"Build token-metadata-creator","text":"You can use the instructions below to build token-metadata-creator
, same steps can be executed in future to update the binaries (replacing appropriate tag) as well.
git fetch --tags --all\ngit pull\n# Replace master with appropriate tag if you'd like to avoid compiling against master\ngit checkout master\n$CNODE_HOME/scripts/cabal-build-all.sh\n
The above would copy the binaries into ~/.local/bin
folder."},{"location":"Build/offchain-metadata-tools/#verify","title":"Verify","text":"Verify that the tool is executable from anywhere by running:
token-metadata-creator -h\n
"},{"location":"Build/wallet/","title":"Wallet","text":"!> - An average pool operator may not require cardano-wallet
at all. Please verify if it is required for your use as mentioned here.
Ensure the Pre-Requisites are in place before you proceed.
"},{"location":"Build/wallet/#build-instructions","title":"Build Instructions","text":"Follow instructions below for building the cardano-wallet binary:
"},{"location":"Build/wallet/#clone-the-repository","title":"Clone the repository","text":"Execute the below to clone the cardano-wallet
repository to $HOME/git
folder on your system:
cd ~/git\ngit clone https://github.com/input-output-hk/cardano-wallet\ncd cardano-wallet\n
"},{"location":"Build/wallet/#build-cardano-wallet","title":"Build Cardano Wallet","text":"You can use the instructions below to build the latest release of cardano-wallet.
!> - Note that the latest release of cardano-wallet
may not work with the latest release of cardano-node
. Please check the compatibility of each cardano-wallet
release yourself in the official docs, e.g. https://github.com/input-output-hk/cardano-wallet/releases/latest.
git fetch --tags --all\ngit pull\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-wallet/releases/latest | jq -r .tag_name)\n$CNODE_HOME/scripts/cabal-build-all.sh\n
The above would copy the binaries into ~/.local/bin
folder.
You can run the below to connect to a cardano-node
instance that is expected to be already running and the wallet will start syncing.
cardano-wallet serve /\n --node-socket $CNODE_HOME/sockets/node0.socket /\n --mainnet / # if using the testnet flag you also need to specify the testnet shelley-genesis.json file\n--database $CNODE_HOME/priv/wallet\n
"},{"location":"Build/wallet/#verify-the-wallet-is-handling-requests","title":"Verify the wallet is handling requests","text":"cardano-wallet network information\n
Expected output should be similar to the following Ok.\n{\n\"network_tip\": {\n\"time\": \"2021-06-01T17:31:05Z\",\n\"epoch_number\": 269,\n\"absolute_slot_number\": 31002374,\n\"slot_number\": 157574\n},\n\"node_era\": \"mary\",\n\"node_tip\": {\n\"height\": {\n\"quantity\": 5795127,\n\"unit\": \"block\"\n},\n\"time\": \"2021-06-01T17:31:00Z\",\n\"epoch_number\": 269,\n\"absolute_slot_number\": 31002369,\n\"slot_number\": 157569\n},\n\"sync_progress\": {\n\"status\": \"ready\"\n},\n\"next_epoch\": {\n\"epoch_start_time\": \"2021-06-04T21:44:51Z\",\n\"epoch_number\": 270\n}\n}\n
"},{"location":"Build/wallet/#creatingrestoring-wallet","title":"Creating/Restoring Wallet","text":"If you're creating a new wallet, you'd first want to generate a mnemonic for use (see below):
cardano-wallet recovery-phrase generate\n# false brother typical saddle settle phrase foster sauce ask sunset firm gate service render burger\n
You can use the above mnemonic to then restore a wallet as per below: cardano-wallet wallet create from-recovery-phrase MyWalletName\n
"},{"location":"Build/wallet/#expected-output","title":"Expected output:","text":"Please enter a 15\u201324 word recovery phrase: false brother typical saddle settle phrase foster sauce ask sunset firm gate service render burger\n(Enter a blank line if you do not wish to use a second factor.)\nPlease enter a 9\u201312 word second factor:\nPlease enter a passphrase: **********\nEnter the passphrase a second time: **********\nOk.\n{\n ...\n}\n
"},{"location":"Scripts/blockperf/","title":"BlockPerf","text":"Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
blockPerf.sh
is a script to monitor the network propagation of new blocks as seen by the local cardano-node.
Although blockPerf can also run on the block producer, it makes the most sense to run it on the upstream relays. There it waits for each new block announced to the relay over the network by its remote peers.
It looks for the delay times that result
You can view this data locally as a console stream, or run it as a systemd service in background.
BlockPerf also sends this data to the TopologyUpdater server, so that there is a possibility to compare this data (similar to sendtip to pooltool). As a contributing operator you get the possibility to see how your own relays compare to other nodes regarding receive quality, delay times and thus performance.
There is no connection or constraint between the TopologyUpdater Relay subscription and the BlockPerf analysis. BlockPerf is even designed to work outside the cnTools suite.
The results of these data are a good basis to make optimizations and to evaluate which changes were useful or might by required to improve the performance compared to other relay nodes.
"},{"location":"Scripts/blockperf/#installation","title":"Installation","text":"The script is best run as a background process. This can be accomplished in many ways but the preferred method is to run it as a systemd service. A terminal multiplexer like tmux or screen could also be used but not covered here.
"},{"location":"Scripts/blockperf/#run-as-service","title":"Run as service","text":"Use the deploy-as-systemd.sh
script to create a systemd unit file. In this setup the script is started in \"service\" mode. Error/Warn level log output is handled by syslog and end up in the systems standard syslog file, normally /var/log/syslog
. journalctl -f -u cnode-tu-blockperf.service
can be used to check service output (follow mode).
Outside the cnTools environment call blockPerf.sh -d
to install it as a systemd service.
If you run blockPerf local in the console (scripts/blockPerf.sh
) , immediately after the appearance of a new block it shows where it came from, how many slots away from the previous block it was, and how many milliseconds the individual steps took.
Block:.... 6860534\n Slot..... 52833850 (+59s)\n ......... 2022-02-09 09:49:01\n Header... 2022-02-09 09:49:02,780 (+1780 ms)\n Request.. 2022-02-09 09:49:02,780 (+0 ms)\n Block.... 2022-02-09 09:49:02,830 (+50 ms)\n Adopted.. 2022-02-09 09:49:02,900 (+70 ms)\n Size..... 79976 bytes\n delay.... 1.819971868 sec\n From..... 104.xxx.xxx.61:3001\n\nBlock:.... 6860535\n Slot..... 52833857 (+7s)\n ......... 2022-02-09 09:49:08\n Header... 2022-02-09 09:49:08,960 (+960 ms)\n Request.. 2022-02-09 09:49:08,970 (+10 ms)\n Block.... 2022-02-09 09:49:09,020 (+50 ms)\n Adopted.. 2022-02-09 09:49:09,090 (+70 ms)\n Size..... 64950 bytes\n delay.... 1.028341023 sec\n From..... 34.xxx.xxx.15:4001\n
"},{"location":"Scripts/blockperf/#collaborative-web-view","title":"Collaborative web view","text":"A further aim of the blockPerf project is to use the data that individual nodes send to the central TopologyUpdater database to produce graphical visualisations and evaluations that provide the participating node operators with useful insights into their performance compared to all others.
"},{"location":"Scripts/cncli/","title":"CNCLI","text":"Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
cncli.sh
is a script to download and deploy CNCLI created and maintained by Andrew Westberg. It's a community-based CLI tool written in RUST for low-level cardano-node
communication. Usage is optional and no script is dependent on it. The main features include:
gLiveView
for peer analysis if available. sqlite
database. firstSlotOfNextEpoch - (3 * k / f)
).cncli.sh
script's main functions, sync
, leaderlog
, validate
and PoolTool sendslots
/sendtip
are not meant to be run manually, but instead deployed as systemd services that run in the background to do the block scraping and validation automatically. Additional commands exist for manual execution to initiate the sqlite
db, filling the blocklog DB with all blocks created by the pool known to the blockchain, migration of old cntoolsBlockCollector JSON blocklog, and re-validation of blocks and leaderlogs. See usage output below for a complete list of available commands.
The script works in tandem with Log Monitor to provide faster adopted status but mainly to catch slots the node is leader for but are unable to create a block for. These are marked as invalid. Blocklog will however work fine without the logMonitor
service and CNCLI
is able to handle everything except catching invalid blocks.
guild-deploy.sh
with guild-deploy.sh -s c
to download and install RUST and CNCLI. IOG fork of libsodium required by CNCLI is automatically compiled by CNCLI build process. If a previous installation is found, RUST and CNCLI will be updated to the latest version.deploy-as-systemd.sh
to deploy the systemd services that handle all the work in the background. Six systemd services in total are deployed whereof four are related to CNCLI. See above for the different purposes they serve.If you want to disable some of the deployed services, run sudo systemctl disable <service>
cnode.service
(main cardano-node
launcher)
cnode-cncli-sync.service
cnode-cncli-leaderlog.service
cnode-cncli-validate.service
cnode-cncli-ptsendtip.service
cnode-cncli-ptsendslots.service
cnode-logmonitor.service
(see Log Monitor)You can override the values in the script at the User Variables section shown below. POOL_ID, POOL_VRF_SKEY and POOL_VRF_VKEY should automatically be detected if POOL_NAME
is set in the common env
file and can be left commented. PT_API_KEY and POOL_TICKER need to be set in the script if PoolTool sendtip
/sendslots
are to be used before starting the services. For the rest of the commented values, if the defaults do not provide the right values, uncomment and make adjustments.
#POOL_ID=\"\" # Automatically detected if POOL_NAME is set in env. Required for leaderlog calculation & pooltool sendtip, lower-case hex pool id\n#POOL_VRF_SKEY=\"\" # Automatically detected if POOL_NAME is set in env. Required for leaderlog calculation, path to pool's vrf.skey file\n#POOL_VRF_VKEY=\"\" # Automatically detected if POOL_NAME is set in env. Required for block validation, path to pool's vrf.vkey file\n#PT_API_KEY=\"\" # POOLTOOL sendtip: set API key, e.g \"a47811d3-0008-4ecd-9f3e-9c22bdb7c82d\"\n#POOL_TICKER=\"\" # POOLTOOL sendtip: set the pools ticker, e.g. \"TCKR\"\n#PT_HOST=\"127.0.0.1\" # POOLTOOL sendtip: connect to a remote node, preferably block producer (default localhost)\n#PT_PORT=\"${CNODE_PORT}\" # POOLTOOL sendtip: port of node to connect to (default is CNODE_PORT from the env file)\n#CNCLI_DIR=\"${CNODE_HOME}/guild-db/cncli\" # path to the directory for cncli sqlite db\n#SLEEP_RATE=60 # CNCLI leaderlog/validate: time to wait until next check (in seconds)\n#CONFIRM_SLOT_CNT=600 # CNCLI validate: require at least these many slots to have passed before validating\n#CONFIRM_BLOCK_CNT=15 # CNCLI validate: require at least these many blocks on top of minted before validating\n#TIMEOUT_LEDGER_STATE=300 # CNCLI leaderlog: timeout in seconds for ledger-state query\n#BATCH_AUTO_UPDATE=N # Set to Y to automatically update the script if a new version is available without user interaction\n
"},{"location":"Scripts/cncli/#run","title":"Run","text":"Services are controlled by sudo systemctl <status|start|stop|restart> <service name>
All services are configured as child services to cnode.service
and as such, when an action is taken against this service it's replicated to all child services. E.g running sudo systemctl start cnode.service
will also start all child services.
Log output is handled by syslog and end up in the systems standard syslog file, normally /var/log/syslog
. journalctl -f -u <service>
can be used to check service output (follow mode). Other logging configurations are not covered here.
Recommended workflow to get started with CNCLI blocklog.
$CNODE_HOME/scripts/cncli.sh migrate <path>
where is the location to the directory containing all blocks_.json files. sudo systemctl start cnode-cncli-sync.service
(starts leaderlog
, validate
& ptsendslots
automatically)sudo systemctl start cnode-logmonitor.service
sudo systemctl start cnode-cncli-ptsendtip.service
(optional but recommended)sudo systemctl restart cnode.service
$CNODE_HOME/scripts/cncli.sh init
Usage: cncli.sh [operation <sub arg>]\nScript to run CNCLI, best launched through systemd deployed by 'deploy-as-systemd.sh'\n\nsync Start CNCLI chainsync process that connects to cardano-node to sync blocks stored in SQLite DB (deployed as service)\nleaderlog One-time leader schedule calculation for current epoch, then continuously monitors and calculates schedule for coming epochs, 1.5 days before epoch boundary on the mainnet (deployed as service)\n force Manually force leaderlog calculation and overwrite even if already done, exits after leaderlog is calculated\nvalidate Continuously monitor and confirm that the blocks made actually was accepted and adopted by chain (deployed as service)\n all One-time re-validation of all blocks in blocklog db\n epoch One-time re-validation of blocks in blocklog db for the specified epoch \nptsendtip Send node tip to PoolTool for network analysis and to show that your node is alive and well with a green badge (deployed as service)\nptsendslots Securely sends PoolTool the number of slots you have assigned for an epoch and validates the correctness of your past epochs (deployed as service)\ninit One-time initialization adding all minted and confirmed blocks to blocklog\nmigrate One-time migration from old blocklog (cntoolsBlockCollector) to new format (post cncli)\n path Path to the old cntoolsBlockCollector blocklog folder holding json files with blocks created\n
"},{"location":"Scripts/cncli/#view-blocklog","title":"View Blocklog","text":"Best and easiest viewed in CNTools and gLiveView
but the blocklog database is a SQLite DB so if you are comfortable with SQL, the sqlite3
command can be used to query the DB.
Block status
- Leader : Scheduled to make block at this slot\n- Ideal : Expected/Ideal number of blocks assigned based on active stake (sigma)\n- Luck : Leader slots assigned vs ideal slots for this epoch\n- Adopted : Block created successfully\n- Confirmed : Block created validated to be on-chain with the certainty set in `cncli.sh` for `CONFIRM_BLOCK_CNT`\n- Missed : Scheduled at slot but no record of it in CNCLI DB and no other pool has made a block for this slot\n- Ghosted : Block created but marked as orphaned and no other pool has made a valid block for this slot -> height battle or block propagation issue\n- Stolen : Another pool has a valid block registered on-chain for the same slot\n- Invalid : Pool failed to create block, base64 encoded error message can be decoded with `echo <base64 hash> | base64 -d | jq -r`\n
CNTools Open CNTools and select [b] Blocks
to open the block viewer. Either select Epoch
and enter the epoch you want to see a detailed view for or choose Summary
to display blocks for last x epochs.
If the node was elected to create blocks in the selected epoch it could look something like this:
Summary >> BLOCKS\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCurrent epoch: 96\n\n+--------+---------------------------+----------------------+--------------------------------------+\n| Epoch | Leader | Ideal | Luck | Adopted | Confirmed | Missed | Ghosted | Stolen | Invalid |\n+--------+---------------------------+----------------------+--------------------------------------+\n| 96 | 34 | 31.66 | 107.39% | 18 | 18 | 0 | 0 | 0 | 0 |\n| 95 | 32 | 30.57 | 104.68% | 32 | 32 | 0 | 0 | 0 | 0 |\n+--------+---------------------------+----------------------+--------------------------------------+\n\n[h] Home | [b] Block View | [i] Info | [*] Refresh\n
Epoch >> BLOCKS\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCurrent epoch: 96\n\n+---------------------------+----------------------+--------------------------------------+\n| Leader | Ideal | Luck | Adopted | Confirmed | Missed | Ghosted | Stolen | Invalid |\n+---------------------------+----------------------+--------------------------------------+\n| 34 | 31.66 | 107.39% | 18 | 18 | 0 | 0 | 0 | 0 |\n+---------------------------+----------------------+--------------------------------------+\n\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n| # | Status | Block | Slot | SlotInEpoch | Scheduled At | Size | Hash |\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n| 1 | confirmed | 2043444 | 11142827 | 40427 | 2020-11-16 08:34:03 CET | 3 | ec216d3fb01e4a3cc3e85305145a31875d9561fa3bbcc6d0ee8297236dbb4115 |\n| 2 | confirmed | 2044321 | 11165082 | 62682 | 2020-11-16 14:44:58 CET | 3 | b75c33a5bbe49a74e4b4cc5df4474398bfb10ed39531fc65ec2acc51f89ddce5 |\n| 3 | confirmed | 2044397 | 11166970 | 64570 | 2020-11-16 15:16:26 CET | 3 | c1ea37fd72543779b6dab46e3e29e0e422784b5fd6188f828ace9eabcc87088f |\n| 4 | confirmed | 2044879 | 11178909 | 76509 | 2020-11-16 18:35:25 CET | 3 | 35a116cec80c5dc295415e4fc8e6435c562b14a5d6833027006c988706c60307 |\n| 5 | confirmed | 2046965 | 11232557 | 130157 | 2020-11-17 09:29:33 CET | 3 | d566e5a1f6a3d78811acab4ae3bdcee6aa42717364f9afecd6cac5093559f466 |\n| 6 | confirmed | 2047101 | 11235675 | 133275 | 2020-11-17 10:21:31 CET | 3 | 3a638e01f70ea1c4a660fe4e6333272e6c61b11cf84dc8a5a107b414d1e057eb |\n| 7 | confirmed | 2047221 | 11238453 | 136053 | 2020-11-17 11:07:49 CET | 3 | 843336f132961b94276603707751cdb9a1c2528b97100819ce47bc317af0a2d6 |\n| 8 | confirmed | 2048692 | 11273507 | 171107 | 2020-11-17 20:52:03 CET | 3 | 9b3eb79fe07e8ebae163870c21ba30460e689b23768d2e5f8e7118c572c4df36 |\n| 9 | confirmed | 2049058 | 11282619 | 180219 | 2020-11-17 23:23:55 CET | 3 | 643396ea9a1a2b6c66bb83bdc589fa19c8ae728d1f1181aab82e8dfe508d430a |\n| 10 | confirmed | 2049321 | 11289237 | 186837 | 2020-11-18 01:14:13 CET | 3 | d93d305a955f40b2298247d44e4bc27fe9e3d1486ef3ef3e73b235b25247ccd7 |\n| 11 | confirmed | 2049747 | 11299205 | 196805 | 2020-11-18 04:00:21 CET | 3 | 19a43deb5014b14760c3e564b41027c5ee50e0a252abddbfcac90c8f56dc0245 |\n| 12 | confirmed | 2050415 | 11316075 | 213675 | 2020-11-18 08:41:31 CET | 3 | dd2cb47653f3bfb3ccc8ffe76906e07d96f1384bafd57a872ddbab3b352403e3 |\n| 13 | confirmed | 2050505 | 11318274 | 215874 | 2020-11-18 09:18:10 CET | 3 | deb834bc42360f8d39eefc5856bb6d7cabb6b04170c842dcbe7e9efdf9dbd2e1 |\n| 14 | confirmed | 2050613 | 11320754 | 218354 | 2020-11-18 09:59:30 CET | 3 | bf094f6fde8e8c29f568a253201e4b92b078e9a1cad60706285e236a91ec95ff |\n| 15 | confirmed | 2050807 | 11325239 | 222839 | 2020-11-18 11:14:15 CET | 3 | 21f904346ba0fd2bb41afaae7d35977cb929d1d9727887f541782576fc6a62c9 |\n| 16 | confirmed | 2050997 | 11330062 | 227662 | 2020-11-18 12:34:38 CET | 3 | 109799d686fe3cad13b156a2d446a544fde2bf5d0e8f157f688f1dc30f35e912 |\n| 17 | confirmed | 2051286 | 11336791 | 234391 | 2020-11-18 14:26:47 CET | 3 | bb1beca7a1d849059110e3d7dc49ecf07b47970af2294fe73555ddfefb9561a8 |\n| 18 | confirmed | 2051734 | 11348498 | 246098 | 2020-11-18 17:41:54 CET | 3 | 
87940b53c2342999c1ba4e185038cda3d8382891a16878a865f5114f540683de |\n| 19 | leader | - | 11382001 | 279601 | 2020-11-19 03:00:17 CET | - | - |\n| 20 | leader | - | 11419959 | 317559 | 2020-11-19 13:32:55 CET | - | - |\n| 21 | leader | - | 11433174 | 330774 | 2020-11-19 17:13:10 CET | - | - |\n| 22 | leader | - | 11434241 | 331841 | 2020-11-19 17:30:57 CET | - | - |\n| 23 | leader | - | 11435289 | 332889 | 2020-11-19 17:48:25 CET | - | - |\n| 24 | leader | - | 11440314 | 337914 | 2020-11-19 19:12:10 CET | - | - |\n| 25 | leader | - | 11442361 | 339961 | 2020-11-19 19:46:17 CET | - | - |\n| 26 | leader | - | 11443861 | 341461 | 2020-11-19 20:11:17 CET | - | - |\n| 27 | leader | - | 11446997 | 344597 | 2020-11-19 21:03:33 CET | - | - |\n| 28 | leader | - | 11453110 | 350710 | 2020-11-19 22:45:26 CET | - | - |\n| 29 | leader | - | 11455323 | 352923 | 2020-11-19 23:22:19 CET | - | - |\n| 30 | leader | - | 11505987 | 403587 | 2020-11-20 13:26:43 CET | - | - |\n| 31 | leader | - | 11514983 | 412583 | 2020-11-20 15:56:39 CET | - | - |\n| 32 | leader | - | 11516010 | 413610 | 2020-11-20 16:13:46 CET | - | - |\n| 33 | leader | - | 11518958 | 416558 | 2020-11-20 17:02:54 CET | - | - |\n| 34 | leader | - | 11533254 | 430854 | 2020-11-20 21:01:10 CET | - | - |\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n
gLiveView Currently shows a block summary for current epoch. For full block details use CNTools for now. Invalid, missing, ghosted and stolen blocks only shown in case of a non-zero value.
\u2502--------------------------------------------------------------\u2502\n\u2502 BLOCKS Leader | Ideal | Luck | Adopted | Confirmed \u2502\n\u2502 24 27.42 87.53% 1 1 \u2502\n\u2502 08:07:57 until leader XXXXXXXXX.....................\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
"},{"location":"Scripts/cntools-changelog/","title":"Changelog","text":"All notable changes to this tool will be documented in this file.
Whenever you're updating between versions where format/hash of keys have changed , or you're changing networks - it is recommended to Backup your Wallet and Pool folders before you proceed with launching cntools on a fresh network.
The format is based on Keep a Changelog, and this adheres to Semantic Versioning.
"},{"location":"Scripts/cntools-changelog/#1100-2023-07-05","title":"[11.0.0] - 2023-07-05","text":""},{"location":"Scripts/cntools-changelog/#changed","title":"Changed","text":"test_koios
call from cntools.library to cntools.shdialog
by default, it is an optional component - and no longer installed by default.--whole-utxo
flag, as it returns all address and will not accept --address
--whole-utxo
flag when query UTxO, as required by cardano-cli 1.28, to keep behaviour same as before.Advanced
Though mostly unchanged in the user interface, this is a major update with most of the code re-written/touched in the back-end. Only the most noticeable changes added to changelog.
"},{"location":"Scripts/cntools-changelog/#added_10","title":"Added","text":"--cold-verification-key-file
instead of --verification-key-file
This is a major release with a lot of changes. It is highly recommended that you familiarise yourself with the usage of Hybrid or Online vs Offline mode in a testnet environment before doing it in production. Please visit https://cardano-community.github.io/guild-operators/upgrade for details.
"},{"location":"Scripts/cntools-changelog/#added_13","title":"Added","text":"cardano-address
and bech32
in your $PATH to use this feature (available if you rebuild cardano-node
using updated cabal-build-all.sh
), reusing guide from @ilap.srm
) when available while deleting files.,
) in user input for sending ADA and pledge/cost at pool registration to make it easier to count the zeroscardano-node 1.19.0
, please upgrade if you're not using this version.Pool >> Show
now moved to its own menu option. This is to de-clutter, and because it takes time to parse this data from ledger-state.Pool >> Delegators
removed.pool >> show
stake distribution showing up as always 0.prereqs.sh -t
) fix for internal update--output-format hex
when extracting pool ID in hex formatWallet >> Encrypt
as these are re-generated from keys and need to be writableFunds >> Withdraw
for base address as this is used to pay the withdraw transaction feePool >> Show
delegator rewards parsing from ledger-statemainnet_candidate
, and add second argument (g) to run prereqs against guild network[c]
to [Esc]
Wallet >> Show
2.1.1
included a change to the env file and thus requires a major version bump.Pool >> Show
Pool >> Show
(stake + reward)
is below pledge (single-owner only for now)Pool >> Show
Pool >> New
to Pool >> Register
.Wallet >> List
Not a registered wallet on chain
information from Wallet listingPool >> Show
Important
Familiarize yourself with the Online workflow of creating wallets and pools on the Preview/Preprod/Guild network first. You can then move on to test the Offline Workflow. The Offline workflow means that the private keys never touch the Online node. When comfortable with both the online and offline CNTools workflow, it's time to deploy what you learnt on the mainnet.
This chapter describes some common use-cases for wallet and pool creation when running CNTools in Online mode. CNTools contains much more functionality not described here.
Create Wallet A wallet is needed for the pledge and to pay the pool registration fee.
[w] Wallet
and you will be presented with the following menu: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Wallet Management\n\n ) New - create a new wallet\n ) Import - import a Daedalus/Yoroi 24/25 mnemonic or Ledger/Trezor HW wallet\n ) Register - register a wallet on chain\n ) De-Register - De-Register (retire) a registered wallet\n ) List - list all available wallets in a compact view\n ) Show - show detailed view of a specific wallet\n ) Remove - remove a wallet\n ) Decrypt - remove write protection and decrypt wallet\n ) Encrypt - encrypt wallet keys and make all files immutable\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Wallet Operation\n\n [n] New\n [i] Import\n [r] Register\n [z] De-Register\n [l] List\n [s] Show\n [x] Remove\n [d] Decrypt\n [e] Encrypt\n [h] Home\n
[n] New
to create a new wallet. [i] Import
can also be used to import a Daedalus/Yoroi based 15 or 24 word wallet seed ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> NEW\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nName of new wallet: Test\n\nNew Wallet : Test\nAddress : addr_test1qpq5qjr774cyc6kxcwp060k4t4hwp42q43v35lmcg3gcycu5uwdwld5yr8m8fgn7su955zf5qahtrgljqfjfa4nr8jfsj4alxk\nEnterprise Address : addr_test1vpq5qjr774cyc6kxcwp060k4t4hwp42q43v35lmcg3gcyccuxhdka\n\nYou can now send and receive Ada using the above addresses.\nNote that Enterprise Address will not take part in staking.\nWallet will be automatically registered on chain if you\nchoose to delegate or pledge wallet when registering a stake pool.\n
The Import
feature of CNTools is originally based on this guide from Ilap.
If you would like to use Import
function to import a Daedalus/Yoroi based 15 or 24 word wallet seed, please ensure that cardano-address
and bech32
binaries are available in your $PATH
environment variable:
bech32 --version\n1.1.0\n\ncardano-address --version\n3.5.0\n
If the version is not as per above, please run the latest guild-deploy.sh
from here and rebuild cardano-node
as instructed here.
To import a Daedalus/Yoroi wallet to CNTools, open CNTools and select the [w] Wallet
option, and then select the [i] Import
. The following menu will appear:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> IMPORT\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Wallet Import\n\n ) Mnemonic - Daedalus/Yoroi 24 or 25 word mnemonic\n ) HW Wallet - Ledger/Trezor hardware wallet\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Wallet operation\n\n [m] Mnemonic\n [w] HW Wallet\n [h] Home\n
Note
You can import a hardware wallet using [w] HW Wallet
above, but please note that before you are able to use a hardware wallet in CNTools, you need to ensure you can detect your hardware device at the OS level using cardano-hw-cli
Select the wallet you want to import; for Daedalus / Yoroi wallets select [m] Mnemonic
:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> IMPORT >> MNEMONIC\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nName of imported wallet: TEST\n\n24 or 15 word mnemonic(space separated):\n
Give your wallet a name (in this case 'TEST'), and enter your mnemonic phrase. Please ensure that you **READ** through the complete notes presented by CNTools before proceeding. Create Pool Create the necessary pool keys.
[p] Pool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Pool Management\n\n ) New - create a new pool\n ) Register - register created pool on chain using a stake wallet (pledge wallet)\n ) Modify - change pool parameters and register updated pool values on chain\n ) Retire - de-register stake pool from chain in specified epoch\n ) List - a compact list view of available local pools\n ) Show - detailed view of specified pool\n ) Rotate - rotate pool KES keys\n ) Decrypt - remove write protection and decrypt pool\n ) Encrypt - encrypt pool cold keys and make all files immutable\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Pool Operation\n\n [n] New\n [r] Register\n [m] Modify\n [x] Retire\n [l] List\n [s] Show\n [o] Rotate\n [d] Decrypt\n [e] Encrypt\n [h] Home\n
[n] New
to create a new pool ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> NEW\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nPool Name: TEST\n\nPool: TEST\nID (hex) : 8d5a3510f18ce241115da38a1b2419ed82d308599c16e98caea1b4c0\nID (bech32) : pool134dr2y833n3yzy2a5w9pkfqeakpdxzzenstwnr9w5x6vqtnclue\n
Register the pool on-chain.
[p] Pool
[r] Register
Make sure you set your pledge low enough to ensure the funds in your wallet will cover the pledge plus pool registration fees.
Test
in our case. As this is a newly created wallet, you will be prompted to continue with wallet registration. When complete and if successful, both wallet and pool will be registered on-chain. It will look something like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> REGISTER\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOnline mode - The default mode to use if all keys are available\n\nHybrid mode - 1) Go through the steps to build a transaction file\n 2) Copy the built tx file to an offline node\n 3) Sign it using 'Sign Tx' with keys on offline node\n (CNTools started in offline mode '-o' without node connection)\n 4) Copy the signed tx file back to the online node and submit using 'Submit Tx'\n\nSelected value: [o] Online\n\n# Select pool\nSelected pool: TEST\n\n# Pool Parameters\npress enter to use default value\n\nPledge (in Ada, default: 50,000):\nMargin (in %, default: 7.5):\nCost (in Ada, minimum: 340, default: 340):\n\n# Pool Metadata\n\nEnter Pool's JSON URL to host metadata file - URL length should be less than 64 chars (default: https://foo.bat/poolmeta.json):\n\nEnter Pool's Name (default: TEST):\nEnter Pool's Ticker , should be between 3-5 characters (default: TEST):\nEnter Pool's Description (default: No Description):\nEnter Pool's Homepage (default: https://foo.com):\n\nOptionally set an extended metadata URL?\nSelected value: [n] No\n{\n \"name\": \"TEST\",\n \"ticker\": \"TEST\",\n \"description\": \"No Description\",\n \"homepage\": \"https://foo.com\",\n \"nonce\": \"1613146429\"\n}\n\nPlease host file /opt/cardano/guild/priv/pool/TEST/poolmeta.json as-is at https://foo.bat/poolmeta.json\n\n# Pool Relay Registration\nSelected value: [d] A or AAAA DNS record (single)\nEnter relays's DNS record, only A or AAAA DNS records: relay.foo.com\nEnter relays's port: 6000\nAdd more relay entries?\nSelected value: [n] No\n\n# Select main owner/pledge wallet (normal CLI wallet)\nSelected wallet: Test (100,000.000000 Ada)\nWallet Test3 not registered on chain\n\nWaiting for new block to be created (timeout = 600 slots, 600s)\nINFO: press any key to cancel and return (won't stop transaction)\n\nOwner #1 : Test added!\n\nRegister a multi-owner pool (you need to have stake.vkey of any additional owner in a seperate wallet folder under $CNODE_HOME/priv/wallet)?\nSelected value: [n] No\n\nUse a separate rewards wallet from main owner?\nSelected value: [n] No\n\nWaiting for new block to be created (timeout = 600 slots, 600s)\nINFO: press any key to cancel and return (won't stop transaction)\n\nPool TEST successfully registered!\nOwner #1 : Test\nReward Wallet : Test\nPledge : 50,000 Ada\nMargin : 7.5 %\nCost : 340 Ada\n\nUncomment and set value for POOL_NAME in ./env with 'TEST'\n\nINFO: Total balance in 1 owner/pledge wallet(s) are: 99,497.996518 Ada\n
POOL_NAME
in ./env
with 'TEST' (in our case, the POOL_NAME
is TEST
). The cnode.sh
script will automatically detect whether the files required to run as a block producing node are present in the $CNODE_HOME/priv/pool/<POOL_NAME>
directory. The node runs with an operational certificate, generated using the KES hot key. For security reasons, the protocol asks you to re-generate (or rotate) your KES key once it reaches expiry. On mainnet, this expiry is 62 cycles of 18 hours each (roughly a quarter, hence the rotation cadence), after which your node will not be able to forge valid blocks until the key is rotated. To be able to rotate KES keys, your cold key files (cold.skey
, cold.vkey
and cold.counter
) need to be present on the machine where you run CNTools to rotate your KES key.
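For those curious about what CNTools automates here, a minimal sketch using plain cardano-cli is shown below. It assumes the default env file names (hot.vkey/hot.skey for the KES pair, cold.skey/cold.counter for the cold keys) and mainnet's slotsPerKESPeriod of 129600; adjust for your network and paths:
cardano-cli node key-gen-KES --verification-key-file hot.vkey --signing-key-file hot.skey\nkes_period=$(( $(cardano-cli query tip --mainnet | jq -r .slot) / 129600 )) # current KES period = slot / slotsPerKESPeriod\ncardano-cli node issue-op-cert --kes-verification-key-file hot.vkey --cold-signing-key-file cold.skey --operational-certificate-issue-counter cold.counter --kes-period ${kes_period} --out-file op.cert\n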
To rotate KES keys and generate the operational certificate - op.cert:
From the main menu select [p] Pool
[o] Rotate
The output should look like:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> ROTATE KES\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSelect pool to rotate KES keys on\nSelected pool: TEST\n\nPool KES keys successfully updated\nNew KES start period : 240\nKES keys will expire : 302 - 2021-09-04 11:24:31 UTC\n\nRestart your pool node for changes to take effect\n\npress any key to return to home menu\n
cardano-node
. If deployed as a systemd
service as shown here, you can run sudo systemctl restart cnode
. You can use gLiveView - the output at the top should say > Cardano Node - (Core - Guild)
.
Alternatively, you can check the node logs in $CNODE_HOME/logs/
to see whether the node is performing leadership checks (TraceStartLeadershipCheck
, TraceNodeIsNotLeader
, etc.)
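A quick way to perform such a check from the command line, assuming your node writes JSON logs to the default $CNODE_HOME/logs directory (the exact file name depends on your config):
grep -h -e TraceStartLeadershipCheck -e TraceNodeIsNotLeader \"${CNODE_HOME}\"/logs/*.json | tail -n 5\n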
Important
Koios CNTools is like a Swiss army knife for pool operators, simplifying typical operations for wallet keys and pool management. Please note that this tool only aims to simplify the usual tasks for its users, but it should NOT act as an excuse to skip understanding how to manually work through things or the basics of Linux operations. The skills highlighted on the home page are paramount for a stake pool operator, and so is the understanding of configuration files and network. Please ensure you've read and understood the disclaimers before proceeding.
Visit the Changelog section to see progress and current release.
"},{"location":"Scripts/cntools/#overview","title":"Overview","text":"The tool consist of three files.
cntools.sh
- the main script to launch cntools.cntools.library
- internal script with helper functions. In addition to the above files, there is also a dependency on the common env
file. CNTools connects to your node through the configuration in the env
file located in the same directory as the script. Customize env
and cntools.sh
files to your needs.
Additionally, CNTools can integrate and enable optional functionalities based on external components:
cncli.sh
is a companion script with optional functionalities to run on the core node (block producer) such as monitoring created blocks, calculating leader schedules and block validation.logMonitor.sh
is another companion script meant to be run together with the cncli.sh
script to give a more complete picture.See CNCLI and Log Monitor sections for more details.
Koios CNTools can operate in the following modes:
-a
runtime argument, this launches CNTools exposing a new Advanced
menu, which allows users to manage (create/mint/burn) new assets.-o
runtime argument, this launches CNTools with a limited set of features. This mode does not require access to cardano-node. It is mainly used to create a Wallet/Pool and access Transaction >> Sign
to sign an offline transaction file created in Hybrid mode. The update functionality is provided from within CNTools. In case of breaking changes, please follow the prompts post-upgrade. If stuck, it's always best to re-run the latest guild-deploy.sh
before proceeding.
If you have not updated in a while, it is possible that you might come from a release with breaking changes. If so, please be sure to check out the upgrade instructions.
"},{"location":"Scripts/cntools/#navigation","title":"Navigation","text":"The scripts menu supports both arrow key navigation and shortcut key selection. The character within the square brackets is the shortcut to press for quick navigation. For other selections like wallet and pool menu that don't contain shortcuts, there is a third way to navigate. Key pressed is compared to the first character of the menu option and if there is a match the selection jumps to this location. A handy way to quickly navigate a large menu.
"},{"location":"Scripts/cntools/#hardware-wallet","title":"Hardware Wallet","text":"CNTools includes hardware wallet support since version 7.0.0
through Vacuumlabs cardano-hw-cli
application. Initialize and update the firmware/app on the device to the latest version before usage, following the manufacturer's instructions.
To enable hardware support run guild-deploy.sh -s w
. This downloads and installs Vacuumlabs cardano-hw-cli
including udev
configuration. When a new version of Vacuumlabs cardano-hw-cli
is released, run guild-deploy.sh -s w
again to update. For additional runtime options, run guild-deploy.sh -h
.
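Once installed, you can confirm that the device is detected at the OS level before using it in CNTools; cardano-hw-cli provides a simple probe for this (unplug/replug the device and unlock it first):
cardano-hw-cli device version\n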
Trezor Bridge
for your system before trying to use your Trezor device in CNTools. You can find the latest version of the bridge at https://wallet.trezor.io/#/bridgeCNTools can be run in online and offline mode. At a very high level, for working with offline devices, remember that you need to use CNTools in an online node to generate a staging transaction for the desired type of transaction, and then move the staging transaction to an offline node to sign (authorize) using the signing keys on your offline node - and then bring back the signed transaction to the online node for submission to the chain.
For the offline workflow, all the wallet and pool keys should be kept on the offline node. The backup function in CNTools has an option to create a backup without private keys (sensitive signing keys) to be transferred to online node. All other files are included in the backup to be transferred to the online node.
Keys excluded from backup when created without private keys: Wallet - payment.skey
, stake.skey
Pool - cold.skey
Note that setting up an offline server requires a good SysOps background (you need to be aware of how to set up your server with an offline mirror repository, how to transfer files across, and be fairly familiar with the disk layout presented in the documentation). The guild-deploy.sh
in its current state is not expected to run on an offline machine. Essentially, you simply need the cardano-cli
, bech32
, cardano-address
binaries in your $PATH
, OS level dependency packages [jq
, coreutils
, pkgconfig
, gcc-c++
and bc
], and perhaps a copy from your online cnode
directory (to ensure you have the right genesis
/config
files on your offline server). We strongly recommend that you familiarise yourself with the workflow on the preview / preprod / guild networks first, before attempting it on mainnet.
Example workflow for creating a wallet and pool:
sequenceDiagram Note over Offline: Create/Import a wallet Note over Offline: Create a new pool Note over Offline: Rotate KES keys to generate op.cert Note over Offline: Create a backup w/o private keys Offline->>Online: Transfer backup to online node Note over Online: Fund the wallet base address with enough Ada Note over Online: Register wallet using ' Wallet \u00bb Register ' in hybrid mode Online->>Offline: Transfer built tx file back to offline node Note over Offline: Use ' Transaction >> Sign ' with payment.skey from wallet to sign transaction Offline->>Online: Transfer signed tx back to online node Note over Online: Use ' Transaction >> Submit ' to send signed transaction to blockchain Note over Online: Register pool in hybrid mode loop Offline-->Online: Repeat steps to sign and submit built pool registration transaction end Note over Online: Verify that pool was successfully registered with ' Pool \u00bb Show ' Online modeTo start CNTools in Online (advanced) Mode, execute the script from the $CNODE_HOME/scripts/
directory:
cd $CNODE_HOME/scripts\n./cntools.sh -a\n
You should get a screen that looks something like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> Koios CNTools vX.X.X - Guild - CONNECTED <<\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Main Menu Telegram Announcement / Support channel: t.me/CardanoKoios/9759\n\n ) Wallet - create, show, remove and protect wallets\n ) Funds - send, withdraw and delegate\n ) Pool - pool creation and management\n ) Transaction - Sign and Submit a cold transaction (hybrid/offline mode)\n ) Blocks - show core node leader schedule & block production statistics\n ) Backup - backup & restore of wallet/pool/config\n ) Advanced - Developer and advanced features: metadata, multi-assets, ...\n ) Refresh - reload home screen content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Epoch 276 - 3d 19:08:27 until next\n What would you like to do? Node Sync: 12 :)\n\n [w] Wallet\n [f] Funds\n [p] Pool\n [t] Transaction\n [b] Blocks\n [u] Update\n [z] Backup & Restore\n [a] Advanced\n [r] Refresh\n [q] Quit\n
Offline mode To start CNTools in Offline Mode, execute the script from the $CNODE_HOME/scripts/
directory using the -o
flag:
cd $CNODE_HOME/scripts\n./cntools.sh -o\n
The main menu header should let you know that CNTools is started in offline mode:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> Koios CNTools vX.X.X - Guild - OFFLINE <<\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Main Menu Telegram Announcement / Support channel: t.me/CardanoKoios/9759\n\n ) Wallet - create, show, remove and protect wallets\n ) Funds - send, withdraw and delegate\n ) Pool - pool creation and management\n ) Transaction - Sign and Submit a cold transaction (hybrid/offline mode)\n\n ) Backup - backup & restore of wallet/pool/config\n\n ) Refresh - reload home screen content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Epoch 276 - 3d 19:03:46 until next\n What would you like to do?\n\n [w] Wallet\n [f] Funds\n [p] Pool\n [t] Transaction\n [z] Backup & Restore\n [r] Refresh\n [q] Quit\n
"},{"location":"Scripts/env/","title":"Common env","text":"A common environment file called env
is sourced by most scripts in the Guild Operators repository. This file holds common variables and functions needed by other scripts. There are several benefits to this: duplicate settings need not be specified, and functions can be reused, decreasing the risk of misconfiguration and inconsistency.
env
file is downloaded together with the rest of the scripts when the Pre-Requisites guide is followed, and is located in the $CNODE_HOME/scripts/
directory. The file is also automatically downloaded/updated by some of the individual scripts if missing, like cntools.sh
, gLiveView.sh
and topologyUpdater.sh
. All custom changes in the User Variables section are untouched on updates, unless a forced overwrite is selected when running guild-deploy.sh
.
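Because env is a plain bash include, your own helper scripts can reuse its variables and functions as well. A minimal sketch, assuming the default install path (the offline argument is assumed here to skip node connectivity checks, mirroring how the bundled scripts source the file):
#!/usr/bin/env bash\n. /opt/cardano/cnode/scripts/env offline # source common variables/functions without requiring a running node\necho \"Node port: ${CNODE_PORT}, config: ${CONFIG}\"\n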
Most variables can be left commented to use the automatically detected or default value, but some need to be set, as explained below.
CNODE_PORT
- This is the most important variable and needs to be set. Used when launching the node through cnode.sh
and to identify the correct process of the node.CNODE_HOME
- The root directory of the Cardano node holding all the files needed. Can be left commented if guild-deploy.sh
has been run as this variable is then exported and added as a system environment variable.POOL_NAME
- If the node is to be started as a block producer by cnode.sh
this variable needs to be uncommented and set. This is the name given to the pool in CNTools (not ticker), i.e. the pool directory name under $CNODE_HOME/priv/pool/<POOL_NAME>
Take your time and look through the different variables and their explanations and decide if you need/want to change the default setting. For a default deployment using guild-deploy.sh
, the CNODE_PORT
(all installs) and POOL_NAME
(only block producer) should be the only variables needed to be set.
######################################\n# User Variables - Change as desired #\n# Leave as is if unsure #\n######################################\n\n#CCLI="${HOME}/.local/bin/cardano-cli" # Override automatic detection of path to cardano-cli executable\n#CNCLI="${HOME}/.local/bin/cncli" # Override automatic detection of path to cncli executable (https://github.com/AndrewWestberg/cncli)\n#CNODE_HOME="/opt/cardano/cnode" # Override default CNODE_HOME path (defaults to /opt/cardano/cnode)\nCNODE_PORT=6000 # Set node port\n#CONFIG="${CNODE_HOME}/files/config.json" # Override automatic detection of node config path\n#SOCKET="${CNODE_HOME}/sockets/node0.socket" # Override automatic detection of path to socket\n#TOPOLOGY="${CNODE_HOME}/files/topology.json" # Override default topology.json path\n#LOG_DIR="${CNODE_HOME}/logs" # Folder where your logs will be sent to (must pre-exist)\n#DB_DIR="${CNODE_HOME}/db" # Folder to store the cardano-node blockchain db\n#UPDATE_CHECK="Y" # Check for updates to scripts, it will still be prompted before proceeding (Y|N).\n#TMP_DIR="/tmp/cnode" # Folder to hold temporary files in the various scripts, each script might create additional subfolders\n#EKG_HOST=127.0.0.1 # Set node EKG host IP\n#EKG_PORT=12788 # Override automatic detection of node EKG port\n#PROM_HOST=127.0.0.1 # Set node Prometheus host IP\n#PROM_PORT=12798 # Override automatic detection of node Prometheus port\n#EKG_TIMEOUT=3 # Maximum time in seconds that you allow EKG request to take before aborting (node metrics)\n#CURL_TIMEOUT=10 # Maximum time in seconds that you allow curl file download to take before aborting (GitHub update process)\n#BLOCKLOG_DIR="${CNODE_HOME}/guild-db/blocklog" # Override default directory used to store block data for core node\n#BLOCKLOG_TZ="UTC" # TimeZone to use when displaying blocklog - https://en.wikipedia.org/wiki/List_of_tz_database_time_zones\n#SHELLEY_TRANS_EPOCH=208 # Override automatic detection of shelley epoch start, e.g. 208 for mainnet\n#TG_BOT_TOKEN="" # Uncomment and set to enable telegramSend function. To create your own BOT-token and Chat-Id follow guide at:\n#TG_CHAT_ID="" # https://cardano-community.github.io/guild-operators/Scripts/sendalerts\n#USE_EKG="N" # Use EKG metrics from the node instead of Prometheus. Prometheus metrics (default) should yield slightly better performance\n#TIMEOUT_LEDGER_STATE=300 # Timeout in seconds for querying and dumping ledger-state\n#IP_VERSION=4 # The IP version to use for push and fetch, valid options: 4 | 6 | mix (Default: 4)\n\n#WALLET_FOLDER="${CNODE_HOME}/priv/wallet" # Root folder for Wallets\n#POOL_FOLDER="${CNODE_HOME}/priv/pool" # Root folder for Pools\n# Each wallet and pool has a friendly name and subfolder containing all related keys, certificates, ...\n#POOL_NAME="" # Set the pool's name to run node as a core node (the name, NOT the ticker, i.e. folder name)\n\n#WALLET_PAY_VK_FILENAME="payment.vkey" # Standardized names for all wallet related files\n#WALLET_PAY_SK_FILENAME="payment.skey"\n#WALLET_HW_PAY_SK_FILENAME="payment.hwsfile"\n#WALLET_PAY_ADDR_FILENAME="payment.addr"\n#WALLET_BASE_ADDR_FILENAME="base.addr"\n#WALLET_STAKE_VK_FILENAME="stake.vkey"\n#WALLET_STAKE_SK_FILENAME="stake.skey"\n#WALLET_HW_STAKE_SK_FILENAME="stake.hwsfile"\n#WALLET_STAKE_ADDR_FILENAME="reward.addr"\n#WALLET_STAKE_CERT_FILENAME="stake.cert"\n#WALLET_STAKE_DEREG_FILENAME="stake.dereg"\n#WALLET_DELEGCERT_FILENAME="delegation.cert"\n\n#POOL_ID_FILENAME="pool.id" # Standardized names for all pool related files\n#POOL_HOTKEY_VK_FILENAME="hot.vkey"\n#POOL_HOTKEY_SK_FILENAME="hot.skey"\n#POOL_COLDKEY_VK_FILENAME="cold.vkey"\n#POOL_COLDKEY_SK_FILENAME="cold.skey"\n#POOL_OPCERT_COUNTER_FILENAME="cold.counter"\n#POOL_OPCERT_FILENAME="op.cert"\n#POOL_VRF_VK_FILENAME="vrf.vkey"\n#POOL_VRF_SK_FILENAME="vrf.skey"\n#POOL_CONFIG_FILENAME="pool.config"\n#POOL_REGCERT_FILENAME="pool.cert"\n#POOL_CURRENT_KES_START="kes.start"\n#POOL_DEREGCERT_FILENAME="pool.dereg"\n\n#ASSET_FOLDER="${CNODE_HOME}/priv/asset" # Root folder for Multi-Assets containing minted assets and subfolders for Policy IDs\n#ASSET_POLICY_VK_FILENAME="policy.vkey" # Standardized names for all multi-asset related files\n#ASSET_POLICY_SK_FILENAME="policy.skey"\n#ASSET_POLICY_SCRIPT_FILENAME="policy.script" # File extension '.script' mandatory\n#ASSET_POLICY_ID_FILENAME="policy.id"\n
"},{"location":"Scripts/gliveview/","title":"gLiveView","text":"Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
Koios gLiveView is a local monitoring tool to use in addition to remote monitoring tools like Prometheus/Grafana, Zabbix or IOG's RTView. This is especially useful when moving to a systemd deployment - if you haven't done so already - as it offers an intuitive UI to monitor the node status.
The tool is independent from other files and can run as a standalone utility that can be stopped/started without affecting the status of cardano-node
.
If you've used guild-deploy.sh, you can skip this part, as this is already set up for you. The tool relies on the common env
configuration file. To get current epoch blocks, the logMonitor.sh script is needed (and can be combined with CNCLI). This is optional and Koios gLiveView will function without it.
Note
For those who follow the folder structure in this repo and do not wish to run guild-deploy.sh
, you can run the below in $CNODE_HOME/scripts
folder
To download the script:
curl -s -o gLiveView.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/gLiveView.sh\ncurl -s -o env https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/env\nchmod 755 gLiveView.sh\n
"},{"location":"Scripts/gliveview/#configuration-startup","title":"Configuration & Startup","text":"For most setups, it's enough to set CNODE_PORT
in the env
file. The rest of the variables should automatically be detected. If required, modify User Variables in env
and gLiveView.sh
to suit your environment (if the folder structure you use is different). This should lead you to a stage where you can start running ./gLiveView.sh
from the folder where you downloaded the script (the default location would be $CNODE_HOME/scripts
). Note that the script is smart enough to automatically detect when you're running as a Core or Relay and will show fields accordingly.
The tool can be run in legacy mode with only standard ASCII characters for terminals with trouble displaying the box-drawing characters. Run ./gLiveView.sh -h
to show available command-line parameters, or set it permanently directly in the script.
A sample output from both core and relay together with peer analysis:
Core Relay Peer Analysis "},{"location":"Scripts/gliveview/#upper-main-section","title":"Upper main section","text":"Displays live metrics from cardano-node gathered through the nodes EKG/Prometheus(env setting) endpoint.
activeSlotsCoeff
). A slot on MainNet happens every 1 second (slotLength
), thus the max chain density equals activeSlotsCoeff (0.05) = 5%
. Normally, the value should fluctuate around this value. starting|sync xx.x%
or if close to reference tip, the tip difference Tip (ref) - Tip (node)
to see how far off the tip (diff value) the node is. With current parameters a slot diff up to 40 from the reference tip is considered good, but it should usually stay below 30. It's perfectly normal to see big differences in slots between blocks - it's the built-in randomness at play. To see if a node is really healthy and staying on tip, you would need to compare the tip between multiple nodes (a quick manual tip check is sketched a few paragraphs below). Cold
peers indicate the number of inactive but known peers to the node.Warm
peers tell how many established connections the node has.Hot
peers how many established connections are actually active.Bi-Dir
(bidirectional) and Uni-Dir
(unidirectional) indicate how the handshake protocol negotiated the connection. The connection between p2p nodes will always be bidirectional, but it will be unidirectional between p2p nodes and non-p2p nodes. Duplex
shows the connections that are actually used in both directions; only bidirectional connections have this potential. If the node is run as a core, identified by the 'forge-about-to-lead' parameter, a second core section is displayed.
Missed slot checks - A value that shows if the node has missed slots for attempting leadership checks (as absolute value and percentage since node startup). !!! info "Missed Slot Leadership Check"
Note that while this counter should ideally be close to zero, you would often see a higher value if the node is busy (e.g. paused for garbage collection or busy with reward calculations). A consistently high percentage of missed slots would need further investigation (assistance for troubleshooting can be sought here), as in extremely remote cases it can overlap with a slot that your node could be a leader for.
Blocks - If CNCLI is activated to store blocks created in a blocklog DB, data from this blocklog is displayed. See linked CNCLI documentation for details regarding the different block metrics. If CNCLI is not deployed, block metrics displayed are taken from node metrics and show blocks created by the node since node start.
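A minimal manual tip check with cardano-cli, assuming the default socket path from env (run the same command against each node you want to compare):
export CARDANO_NODE_SOCKET_PATH=\"/opt/cardano/cnode/sockets/node0.socket\"\ncardano-cli query tip --mainnet # compare slot/block across your nodes\n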
A manual peer analysis can be triggered by key press p
. A latency test will be done on incoming and outgoing connections to the node.
Note
Note that with P2P enabled, an incoming/outgoing connection can be reused for bi-directional traffic. There isn't a way to distinctly identify the P2P peer's direction yet for a given IP.
Outgoing connections (peers in the topology file): the ping type used is tried in this order: 1. cncli - If available, this gives the most accurate measure as it checks the entire handshake process against the remote peer. 2. ss - Sends a TCP SYN packet to ping the remote peer on the cardano-node port. Should give ~100% success rate. 3. tcptraceroute - Same as ss. 4. ping - fallback method using ICMP ping against the IP. Will only work if the firewall of the remote peer accepts ICMP traffic.
For incoming connections, only ICMP ping is used, as the remote peer's port is unknown. It's not uncommon to see many undetermined peers for incoming connections, as it's good security practice to disable ICMP in the firewall.
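If you want to reproduce these probes by hand, the equivalents below can help narrow down connectivity issues; 198.51.100.7:6000 is a placeholder peer address/port:
tcptraceroute 198.51.100.7 6000 # TCP probe against the node port, similar to the ss/tcptraceroute methods\nping -c 3 198.51.100.7 # ICMP fallback; fails if the remote firewall drops ICMP\n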
Once the analysis is finished, it will display the RTTs (round-trip times) for the peers and group them in the ranges 0-50, 50-100, 100-200, and above 200. The analysis is NOT live. Press [h] Home
to go back to default view or [i] Info
to show in-script help text. Up
and Down
arrow keys are used to select the incoming or outgoing detailed list of IPs and their RTT values. Left (<)
and Right (>)
arrow keys can be used to navigate the pages in the selected list.
In case you run into trouble while running the script, you might want to edit env
& gLiveView.sh
and look at the User Variables section. You can override the values if the automatic detection does not provide the right information, but we would appreciate it if you could also notify us by raising an issue against the GitHub repository:
gLiveView.sh
######################################\n# User Variables - Change as desired #\n######################################\n\nNODE_NAME=\"Cardano Node\" # Change your node's name prefix here, keep at or below 19 characters!\nREFRESH_RATE=2 # How often (in seconds) to refresh the view (additional time for processing and output may slow it down)\nLEGACY_MODE=false # (true|false) If enabled unicode box-drawing characters will be replaced by standard ASCII characters\nRETRIES=3 # How many attempts to connect to running Cardano node before erroring out and quitting\nPEER_LIST_CNT=6 # Number of peers to show on each in/out page in peer analysis view\nTHEME=\"dark\" # dark = suited for terminals with a dark background\n# light = suited for terminals with a bright background\nENABLE_IP_GEOLOCATION=\"Y\" # Enable IP geolocation on outgoing and incoming connections using ip-api.com\n
"},{"location":"Scripts/itnrewards/","title":"Itnrewards","text":""},{"location":"Scripts/itnrewards/#concept","title":"Concept","text":"To claim rewards earned during the Incentivized TestNet the private and public keys from ITN must be converted to Shelley stake keys. A script called itnRewards.sh
has been created to guide you through the process of converting the keys and to create a CNTools compatible wallet from where the rewards can be withdrawn.
jcli
account in ITN was ed25519_sk (not extended), you can run the itnRewards.sh
script providing the name for the CNTools wallet and ITN owner public/secret keys that were used to register your pool as below. cd $CNODE_HOME/scripts\n./itnRewards.sh MyITNWallet ~/jormu/account/priv/owner.sk ~/jormu/account/priv/owner.pk\n
FUNDS >> WITHDRAW
to move rewards to the base address of walletDisclaimer
Currently this is to protect the existing pools from the ITN who already have a delegator base against spoofing - to avoid scammers building on results of ITN from known pools. There would be a solution in the future for Mainnet nodes too - but it doesn't apply to those in its current form.
"},{"location":"Scripts/itnwitness/#concept","title":"Concept","text":"Due to the expected ticker spoofing attack for pools that were famous during ITN, some of the community members have proposed an interim solution to verify the legitimacy of a pool for delegators. You can check the high-level workflow below:
graph TB A(\"ITN Owner skey (ed25519/ed25519e) ..\") --x C([\"jcli key sign ..\"]) B(\"Haskell Pool ID (pool.id) ..\") --x C C --x D(\"Signature key, (pool.sig) ..\") E(\"ITN Owner vkey (ed25519_pk) ..\") --x F(\"Extended Metadata JSON (poolmeta_extended.json) ..\") D --x F F --x G(\"Pool Meta JSON (poolmeta.json) ..\") ;"},{"location":"Scripts/itnwitness/#steps","title":"Steps","text":"The actual implementation is pretty straightforward, we will keep it brisk - as we assume ones participating are fairly familiar with jcli
usage.
mainnet_pool.id
)owner_skey
) as per below: jcli key sign --secret-key ~/jormu/account/priv/owner.sk $CNODE_HOME/priv/pool/TEST/pool.id --output mainnet_pool.sig\ncat mainnet_pool.sig\n# ed25519_sig1sn32v3z...d72rg7rc6gs\n
{\n\"itn\": {\n\"owner\": \"ed25519_pk1...\",\n\"witness\": \"ed25519_sig1...\"\n}\n}\n
If the process is approved to appear for wallets, we may consider providing easier alternatives. If any queries about the process, or any additions please create a git issue/PR against guild repository - to capture common queries and update instructions/help text where appropriate.
"},{"location":"Scripts/itnwitness/#sample-output-of-json-files-generated","title":"Sample output of JSON files generated","text":"Metadata JSON used for registering pool (one that will be hosted URL used to define pool, eg: https://hosting.site/poolmeta.json)
{\n\"name\":\"Test\",\n\"ticker\":\"TEST\",\n\"description\":\"For demo purposes only\",\n\"homepage\":\"https://hosting.site\",\n\"nonce\":\"1595816423\",\n\"extended\":\"https://hosting.site/poolmeta_extended.json\"\n}\n
Extended Metadata JSON used for hosting additional metadata (hosted at URL referred in extended
field above, thus - eg : https://hosting.site/poolmeta_extended.json)
{\n\"itn\": {\n\"owner\": \"ed25519_pk1...\",\n\"witness\": \"ed25519_sig1...\"\n}\n}\n
"},{"location":"Scripts/logmonitor/","title":"Log Monitor","text":"Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
logMonitor.sh
is a general purpose JSON log monitoring script for traces created by cardano-node
. Currently, it looks for traces related to leader slots and block creation but other uses could be added in the future.
For the core node (block producer) the logMonitor.sh
script can be run to monitor the JSON log file created by cardano-node
for traces related to leader slots and block creation.
For optimal coverage, it's best run together with CNCLI scripts as they provide different functionalities. Together, they create a complete picture of blocks assigned, created, validated or invalidated due to node issues.
"},{"location":"Scripts/logmonitor/#installation","title":"Installation","text":"The script is best run as a background process. This can be accomplished in many ways but the preferred method is to run it as a systemd service. A terminal multiplexer like tmux or screen could also be used but not covered here.
Use the deploy-as-systemd.sh
script to create a systemd unit file (deployed together with CNCLI). Log output is handled by syslog and ends up in the system's standard syslog file, normally /var/log/syslog
. journalctl -f -u cnode-logmonitor.service
can be used to check service output (follow mode). Other logging configurations are not covered here.
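For reference, a typical deployment and log check could look like the following, assuming the default guild folder layout:
cd \"${CNODE_HOME}\"/scripts\n./deploy-as-systemd.sh # sets up the logmonitor (and CNCLI) services\nsudo journalctl -f -u cnode-logmonitor.service # follow service output\n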
Best viewed in CNTools or gLiveView. See CNCLI for example output.
"},{"location":"Scripts/sendalerts/","title":"Sendalerts","text":"!> Ensure the Pre-Requisites are in place before you proceed.
This section describes the ways in which CNTools can send important messages to the operator.
"},{"location":"Scripts/sendalerts/#telegram-alerts","title":"Telegram alerts","text":"If known but unwanted errors occur on your node, or if characteristic values indicate an unusual status , CNTools can send you Telegram alert messages.
To do this, you first have to activate your own bot and link it to your own Telegram user. Here is an explanation of how this works:
Open Telegram and search for \"botfather\".
Write him your wish: /newbot
.
Define a name for your bot, such as cntools_[POOLNAME]_alerts
.
Botfather will confirm the creation of your bot by giving you the unique bot access token. Keep it safe and private.
Now send at least one direct message to your new bot.
Open this URL in your browser by using your own, just created bot access token:
https://api.telegram.org/bot<your-access-token>/getUpdates\n
result.message.chat.id
. This chat id should be a large integer number.This is all you need to enable your Telegram alerts in the scripts/env
file - uncomment and add the chat ID to the TG_CHAT_ID
user variable in the env
file:
...\nTG_CHAT_ID=\"<YOUR_TG_CHAT_ID>\"\n... \n
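To verify the wiring end-to-end, you can extract the chat id and send yourself a test message from the command line; a small sketch using curl and jq (the token and chat id below are placeholders):
curl -s \"https://api.telegram.org/bot<your-access-token>/getUpdates\" | jq '.result[0].message.chat.id'\ncurl -s \"https://api.telegram.org/bot<your-access-token>/sendMessage\" -d chat_id=\"<YOUR_TG_CHAT_ID>\" -d text=\"test alert\"\n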
"},{"location":"Scripts/topologyupdater/","title":"Topology Updater","text":"Reminder !!
The topologyUpdater shell script must be executed on the relay node as a cronjob exactly every 60 minutes. After 4 consecutive requests (3 hours) the node is considered a new relay node and is listed in the topology file. If the node is turned off, it's automatically delisted after 3 hours.
"},{"location":"Scripts/topologyupdater/#download","title":"Download and Configure","text":"If you have run guild-deploy.sh, this should already be available in your scripts folder and make this step unnecessary.
Before the updater can make a valid request to the central topology service, it must query the current tip/blockNo from the well-synced local node. It connects to your node through the configuration in the script as well as the common env
configuration file. Customize these files for your needs.
To download topologyUpdater.sh
manually, you can execute the commands below and test executing Topology Updater once (it's OK if first execution gives back an error):
cd $CNODE_HOME/scripts\ncurl -s -o topologyUpdater.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/topologyUpdater.sh\ncurl -s -o env https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/env\nchmod 750 topologyUpdater.sh\n./topologyUpdater.sh\n
"},{"location":"Scripts/topologyupdater/#modify","title":"Examine and modify the variables within topologyUpdater.sh script","text":"Out of the box, the scripts might come with some assumptions, that may or may not be valid for your environment. One of the common changes as an SPO would be to the complete CUSTOM_PEERS section as below to include your local relays/BP nodes (described in the How do I add my own nodes section), and any additional peers you'd like to be always available at minimum. Please do take time to update the variables in User Variables section in env
& topologyUpdater.sh
:
### topologyUpdater.sh\n\n######################################\n# User Variables - Change as desired #\n######################################\n\nCNODE_HOSTNAME=\"CHANGE ME\" # (Optional) Must resolve to the IP you are requesting from\nCNODE_VALENCY=1 # (Optional) for multi-IP hostnames\nMAX_PEERS=15 # Maximum number of peers to return on successful fetch\n#CUSTOM_PEERS=\"None\" # Additional custom peers to (IP,port[,valency]) to add to your target topology.json\n# eg: \"10.0.0.1,3001|10.0.0.2,3002|relays.mydomain.com,3003,3\"\n#BATCH_AUTO_UPDATE=N # Set to Y to automatically update the script if a new version is available without user interaction\n
Any customisations you add above will be saved across future guild-deploy.sh
executions, unless you specify the -f
flag to overwrite completely.
systemd service The script can be deployed as a background service in different ways, but the recommended and easiest way, if guild-deploy.sh was used, is to utilize the deploy-as-systemd.sh
script to setup and schedule the execution. This will deploy both push & fetch service files as well as timers for a scheduled 60 min node alive message and cnode restart at the user set interval (default: 24 hours) when running the deploy script.
cnode-tu-push.service
: pushes a node alive message to Topology Updater APIcnode-tu-push.timer
: schedules the push service to execute once every hourcnode-tu-fetch.service
: fetches a fresh topology file before the cnode.service
file is started/restartedcnode-tu-restart.service
: handles the restart of cardano-node
(cnode.sh
)cnode-tu-restart.timer
: schedules the cardano-node
restart service, default every 24hsystemctl list-timers
can be used to check the push and restart service schedules.
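To inspect the deployed timers and recent push results, the standard systemd tooling is enough; for example:
systemctl list-timers 'cnode-tu-*' # next/last run of the push and restart timers\nsudo journalctl -u cnode-tu-push.service --since \"3 hours ago\" # recent push attempts\n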
crontab job Another way to deploy the topologyUpdater.sh
script is as a crontab
job. Add the script to be executed once per hour at a minute of your choice (e.g. xx:25 in the example below). The example below will handle both the fetch and push in a single call to the script once an hour. In addition to the below crontab job for topologyUpdater, it's expected that you also add a scheduled restart of the relay node to pick up a fresh topology file fetched by the topologyUpdater script, with relays that are alive and well.
25 * * * * /opt/cardano/cnode/scripts/topologyUpdater.sh\n
"},{"location":"Scripts/topologyupdater/#logs","title":"Logs","text":"You can check the last result of push message in logs/topologyUpdater_lastresult.json
. If deployed as systemd service, use sudo journalctl -u <service>
to check output from service.
If one of the parameters is outside the allowed ranges, invalid or missing, the returned JSON will tell you what needs to be fixed.
Don't try to execute the script more often than once per hour. It's completely useless and may lead to a temporary blacklisting.
"},{"location":"Scripts/topologyupdater/#why-does-my-topology-file-only-contain-iog-peers","title":"Why does my topology file only contain IOG peers?","text":"Each subscribed node (4 consecutive requests) is allowed to fetch a subset of other nodes to prove loyalty/stability of the relay. Until reaching this point, your fetch calls will only return IOG peers combined with any custom peers added in USER VARIABLES section of topologyUpdater.sh
script
The engineers of cardano-node
network stack suggested using around 20 peers. More peers create unnecessary and unwanted system load and delays.
In its default setting, topologyUpdater returns a list of 15 remote peers.
Note that the change in topology is only effective upon restart of your node. Make sure you account for some scheduled restarts on your relays, to help onboard newer relays onto the network (as described in the systemd section).
"},{"location":"Scripts/topologyupdater/#how-do-i-add-my-own-relaysstatic-nodes-in-addition-to-dynamic-list-generated-by-topologyupdater","title":"How do I add my own relays/static nodes in addition to dynamic list generated by topologyUpdater?","text":"Most of the Stake Pool Operators may have few preferences (own relays, close friends, etc) that they would like to add to their topology by default. This is where the CUSTOM_PEERS
variable in topologyUpdater.sh
comes in. You can add a list of peers in the format of: hostname/IP,port[,valency]
here and the output topology.json
formed will already include the custom peers that you supplied. Every custom peer is defined in the form [address],[port]
and an optional ,[valency]
(if not specified, the valency defaults to 1
). Multiple custom peers are separated by |
. An example of a valid CUSTOM_PEERS
variable would be:
CUSTOM_PEERS=\"foo.bar.io,3001,2|198.175.21.197,6001|36.233.3.89,6000\n
The list above would add three custom peers with the specified addresses and ports, with the first one additionally specifying the optional valency parameter (in this case 2
)."},{"location":"Scripts/topologyupdater/#how-are-the-peers-for-my-topology-file-selected","title":"How are the peers for my topology file selected?","text":"We calculate the distance on the Earth's surface from your node's IP to all subscribed peers. We then order the peers by distance (closest first) and start by selecting one peer. We then skip some, pick the next, skip, pick, skip, pick ... until we reach the end of the list (furthest away). The number of skipped records is calculated in a way to have the desired number of peers at the end.
Every requesting node has its personal distance to all other nodes.
We assume this should result in a well-distributed and interconnected peering network.
"},{"location":"docker/build/","title":"Build","text":""},{"location":"docker/build/#intro","title":"Intro","text":"\ud83d\udca1 Docker containers are the fastest way to run a Cardano node in both \"Relay\" and \"Block-Producing\" (Pool) mode.
"},{"location":"docker/build/#how-to-build","title":"How to build","text":"docker build -t cardanocommunity/cardano-node:latest - < dockerfile_bin\n
"},{"location":"docker/build/#for-windows-users","title":"For Windows Users","text":"With Powershell on Windows, you can run docker by typing the following command:
Get-Content dockerfile_bin | docker build -t guild-operators/cardano-node:latest -\n
"},{"location":"docker/build/#see-also","title":"See also","text":"Docker Tips
Docker Official Docs
"},{"location":"docker/docker/","title":"Overview","text":"Running your own Cardano node has never been so fast and easy.
But first, a kind reminder to the security aspects of running docker containers.
"},{"location":"docker/docker/#external-resources","title":"External resources","text":"Modular docker images based on Debian.
Based on the Guild's work we decided to build the Cardano Node images in 3 stages:
prereq.sh
to prepare the development environment before compiling the node source code. -> Stage1If you prefer to build the images your own than you can check:
The dockerfiles are located in ./files/docker/
Node Ports Wallet Ports Flavor Node (6000) Wallet (8090) Debian Prometheus (12798) Prometheus (12798) EKG (12781)"},{"location":"docker/run/","title":"Run","text":""},{"location":"docker/run/#os-requirements","title":"OS Requirements","text":"docker-ce
installed - Get Docker.Note
1) --entrypoint=bash
# This option won't start the node's container but only the OS running (the node software wont actually start, you'll need to manually execute entrypoint.sh ), ready to get in (trough the command docker exec -it < container name or hash > /bin/bash
) and play/explore around with it in command line mode. 2) all guild tools env variable can be used to start a new container using custom values by using the \"-e\" option. 3) CPU and RAM and SHared Memory allocation option for the container can be used when you start the container (i.e. --shm-size or --memory or --cpus official docker resource docs)
docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
"},{"location":"docker/run/#use-cases_1","title":"Use Cases:","text":"docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-p 6000:6000\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-e CONFIG=/opt/cardano/cnode/priv/<your own configuration files>.yml\n-p 6000:6000\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
"},{"location":"docker/security/","title":"Security","text":""},{"location":"docker/security/#docker-security-best-practices","title":"Docker Security best practices","text":""},{"location":"docker/security/#intro","title":"Intro","text":"On the security front, Docker developers are faced with different types of security attacks such as:
Docker containers are now being exploited to covertly mine for cryptocurrency, marking a shift from ransomware to cryptocurrency malware. As with all things in security, also Docker security is a moving target \u2014 so it\u2019s helpful to have access to up-to-date information, including experience-based best practices, for securing your containerized environments.
"},{"location":"docker/security/#here-below-some-key-concepts","title":"Here below some key concepts:","text":"Use a Third-Party Security Tool Docker allows you to use containers from untrusted public repositories, which increases the need to scrutinize whether the container was created securely and whether it is free of any corrupt or malicious files. For this, use a multi-purpose security tool that gives extensive dev-to-production security controls.(keep reading below)
Manage Vulnerability It is best to have a sound vulnerability management program that has multiple checks throughout the container lifecycle. Vulnerability management should incorporate quality gates to detect access issues and weaknesses for a potential exploit from dev-to-production environments.
Monitor and Audit Container Activity It is vital to monitor the container ecosystem and detect suspicious activity. Container monitoring activities provide real-time reports that can help you react promptly to a security breach.
Enable Docker Content Trust Docker Content Trustis a new feature incorporated into Docker 1.8. It is disabled by default, but once enabled, allows you to verify the integrity, authenticity, and publication date of all Docker images from the Docker Hub Registry.
Use Docker Bench for Security You should consider Docker Bench for Security as your must-use script. Once the script is run, you will notice a lot of information regarding configuration best practices for deploying Docker containers that can be used to further secure your Docker server and containers.
Resource Utilization To reduce performance impacts and denial-of-service attacks, it is a good practice to implement limits on the system resources that the containers can consume. If, for example, a web server is compromised, it helps to limit the impact to the other processes that are running on a host.
RBAC RBAC is role-based access control. If you have multiple users accessing you enviroment, this is a must-have. It can be quite expensive to implement but portainer makes it super easy.
Guild tips:
NEVER NEVER NEVER expose Docker API publicly!!!
(disabled by default)
Keep Docker Host Up-to-date
Reverse Proxy
Docker Socket Ownership
Run Docker Containers as Root
Use Trusted Docker Images
Use Privileged Mode Carefully
(This is usually done by adding --privileged you can use --security-opt=no-new-privileges
instead)Some more general tips:
\"--cap-drop ALL\"
DOCKER_OPTS= \"--iptables=false\"
With this quick guide you will be able to run a cardano node in seconds and also have the powerfull Koios SPO scripts built-in.
"},{"location":"docker/tips/#how-to-operate-interactively-within-the-container","title":"How to operate interactively within the container","text":"Once executed the container as a deamon with attached tty you are then able to enter the container by using the flag -dit
.
While if you have a hook within the container console, use the following command (change CN
with your container name):
docker exec -it CN bash
This command will bring you within the container bash env ready to use the Koios tools.
"},{"location":"docker/tips/#docker-flags-explained","title":"Docker flags explained","text":"\"docker build\" options explained:\n -t : option is to \"tag\" the image you can name the image as you prefer as long as you maintain the references between dockerfiles.\n\n\"docker run\" options explained:\n -d : for detach the container\n -i : interactive enabled -t : terminal session enabled\n -e : set an Env Variable\n -p : set exposed ports (by default if not specified the ports will be reachable only internally)\n--hostname : Container's hostname\n --name : Container's name\n
"},{"location":"docker/tips/#custom-container-with-your-own-cfg","title":"Custom container with your own cfg","text":"docker run --init -itd \n-name Relay # Optional (recommended for quick access): set a name for your newly created container.\n-p 9000:6000 # Optional: to expose the internal container's port (6000) to the host <IP> port 9000\n-e NETWORK=mainnet # Mandatory: mainnet / preprod / guild-mainnet / guild\n--security-opt=no-new-privileges # Option to prevent privilege escalations\n-v <YourNetPath>:/opt/cardano/cnode/sockets # Optional: useful to share the node socket with other containers\n-v <YourCfgPath>:/opt/cardano/cnode/priv # Optional: if used has to contain all the sensitive keys needed to run a node as core\n-v <YourDBbk>:/opt/cardano/cnode/db # Optional: if not set a fresh DB will be downloaded from scratch\ncardanocommunity/cardano-node:latest # Mandatory: image to run\n
Note
To be able to use the CNTools encryption key feature, you need to manually set ENABLE_CHATTR to \"true\" in \"cntools.config\" and not use the --security-opt=no-new-privileges
docker run option.
This documentation site (rather, the repository itself) is created by some well-known and experienced community members, and contains instructions/information about various guild tools which simplify stake-ops (setting up, managing and monitoring pools) for operators. Note that the guides are present to help you simplify your tasks - but as an entity responsible for creating blocks on a financial platform, we expect some basic pre-requisite skill sets - at a professional level - before entering the portal:
cardano-cli usage, and having worked on preview/preprod/guild networks for pool operations without the use of wrapper scripts - as an educational exercise;

Everyone is welcome to contribute to the repository (via documentation, testing, code, videos, etc). Our aim is to work together and reduce confusion rather than hosting 100 versions of documentation - each marketing their pool in their own way.
"},{"location":"#support","title":"Support","text":"The Telegram Support channel is used to announce new releases and changes to the code base. This is also the place to ask general questions regarding the documentation and scripts on this site.
To report bugs and issues with scripts and documentation please open a GitHub Issue. Feature requests are best opened as a discussion thread.
"},{"location":"#getting-started","title":"Getting Started","text":"Use the sidebar to navigate through the topics. Note that the instructions assume the folder structure as per here.
Again, feedback/contribution and ownership of tasks is always welcome. If you're interested in collaborating regularly, make a start - and you should be part of the guild already.
"},{"location":"basics/","title":"Basics","text":""},{"location":"basics/#architecture","title":"Architecture","text":"The architecture for various components are already described at docs.cardano.org by CF/IOHK. We will not reinvent the wheel
"},{"location":"basics/#manual-software-pre-requirements","title":"Manual Software Pre-Requirements","text":"While we do not intend to hand out step-by-step instructions, the tools are often misused as a shortcut to avoid ensuring base skillsets mentioned on home page. Some of the common gotchas that we often find SPOs to miss out on:
- It is imperative that pools operate with highly accurate system time, in order to propagate blocks to the network in a timely manner and avoid penalties to own (or at times other competing) blocks; a quick check is sketched below. Please refer to sample guidance [here](https://ubuntu.com/server/docs/network-ntp) for details - the precise steps may depend on your OS.\n- Ensure your firewall rules at network as well as OS level are updated according to the usage of your system; you'd want to whitelist only the rules that you really need to open to the world (eg: you might need node, SSH, and potentially secured webserver/proxy ports to be open, depending on the components you run).\n- Update your SSH configuration to prevent password-based logon.\n- Ensure that you use an offline workflow; you should never need to have your offline keys on online nodes. The tools provide backup/restore functionality to pass only online keys to online nodes.\n
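A minimal sketch of verifying time synchronisation on a systemd-based Linux host (assuming systemd-timesyncd or chrony is in use; adjust for your OS):

timedatectl         # look for "System clock synchronized: yes"
chronyc tracking    # if using chrony: offset should be within a few milliseconds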
"},{"location":"basics/#pre-requisites","title":"Pre-Requisites","text":"Reminder !!
You're expected to run the commands below from the same session, using the same working directories as indicated, and using a non-root user with sudo access
. You are expected to be familiar with this as part of pre-requisite skill sets for stake pool operators.
The pre-requisites for Linux systems are automated to be executed as a single script. To download the pre-requisites scripts, execute the below:
mkdir \"$HOME/tmp\";cd \"$HOME/tmp\"\n# Install curl\n# CentOS / RedHat - sudo dnf -y install curl\n# Ubuntu / Debian - sudo apt -y install curl\ncurl -sS -o guild-deploy.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/guild-deploy.sh\nchmod 755 guild-deploy.sh\n
Please familiarise yourself with the syntax of guild-deploy.sh
before proceeding. The usage syntax can be checked using ./guild-deploy.sh -h
, sample output below:
Usage: guild-deploy.sh [-n <mainnet|preprod|guild|preview>] [-p path] [-t <name>] [-b <branch>] [-u] [-s [p][b][l][f][d][c][o][w][x]]\nSet up dependencies for building/using common tools across cardano ecosystem.\nThe script will always update dynamic content from existing scripts retaining existing user variables\n\n-n Connect to specified network instead of mainnet network (Default: connect to cardano mainnet network) eg: -n guild\n-p Parent folder path underneath which the top-level folder will be created (Default: /opt/cardano)\n-t Alternate name for top level folder - only alpha-numeric chars allowed (Default: cnode)\n-b Use alternate branch of scripts to download - only recommended for testing/development (Default: master)\n-u Skip update check for script itself\n-s Selective Install, only deploy specific components as below:\n   p Install common pre-requisite OS-level Dependencies for most tools on this repo (Default: skip)\n   b Install OS level dependencies for tools required while building cardano-node/cardano-db-sync components (Default: skip)\n   l Build and Install libsodium fork from IO repositories (Default: skip)\n   f Force overwrite entire content of scripts and config files (backups of existing ones will be created) (Default: skip)\n   d Download latest (released) binaries for bech32, cardano-address, cardano-node, cardano-cli, cardano-db-sync and cardano-submit-api binaries (Default: skip)\n   c Install/Upgrade CNCLI binary (Default: skip) # (1)!\n   o Install/Upgrade Ogmios Server binary (Default: skip)\n   w Install/Upgrade Cardano Hardware CLI (Default: skip)\n   x Install/Upgrade Cardano Signer binary (Default: skip)\n
If you see an error related to glibc when running the pre-compiled cncli binary, it would likely be due to a build mismatch between the pre-compiled binary and your OS, which is not uncommon. You may need to compile cncli manually on your OS as per the instructions here - make sure to copy the output binary to the \"${HOME}/.local/bin\" folder.

This script uses opt-in selection of what you'd like the script to do (as against the previous version that used to try and auto-detect versions). The defaults without any arguments will only update the static part of script contents for you. A typical example install, deploying most components but not overwriting the static part of existing files, for the preview network would be:
./guild-deploy.sh -b master -n preview -t cnode -s pdlcowx\n. \"${HOME}/.bashrc\"\n
If, instead of downloading, you'd like to build the components yourself, you could use:
./guild-deploy.sh -b master -n preview -t cnode -s pblcowx\n. \"${HOME}/.bashrc\"\n
Lastly, if you'd want to update your scripts but not install any additional dependencies, you may simply run:
./guild-deploy.sh -b master -n preview -t cnode\n
"},{"location":"basics/#folder-structure","title":"Folder structure","text":"Running the script above will create the folder structure as per below, for your reference. You can replace the top level folder /opt/cardano/cnode
by editing the value of CNODE_HOME
in ~/.bashrc
and $CNODE_HOME/files/env
files:
/opt/cardano/cnode # Top-Level Folder\n├── ...\n├── files # Config, genesis and topology files\n│ ├── ...\n│ ├── byron-genesis.json # Byron Genesis file referenced in config.json\n│ ├── shelley-genesis.json # Genesis file referenced in config.json\n│ ├── alonzo-genesis.json # Alonzo Genesis file referenced in config.json\n│ ├── config.json # Config file used by cardano-node\n│ └── topology.json # Map of chain for cardano-node to boot from\n├── db # DB Store for cardano-node\n├── guild-db # DB Store for guild-specific tools and additions (eg: cncli, cardano-db-sync's schema)\n├── logs # Logs for cardano-node\n├── priv # Folder to store your keys (permission: 600)\n├── scripts # Scripts to start and interact with cardano-node\n└── sockets # Socket files created by cardano-node\n
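For illustration (the path is an arbitrary example, not a recommendation), relocating the top-level folder as described above is just an environment variable edit, mirrored in both files, followed by re-sourcing the shell:

# ~/.bashrc (mirror the same value in $CNODE_HOME/files/env)
export CNODE_HOME=/opt/cardano/mynode
source ~/.bashrc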
"},{"location":"build/","title":"Overview","text":"The documentation here uses instructions from IOHK repositories as a foundation, with additional info which we can contribute to where appropriate. Note that not everyone needs to build each component. You can refer to architecture to understand and qualify which of the components built by IO you want to run.
"},{"location":"build/#components","title":"Components","text":"For most Pool Operators, simply building cardano-node should be enough. Use the below to decide whether you need other components:
graph TB A([Interact with HD Wallets locally]) B([Explore blockchain locally]) C([Easy pool-ops and fund management]) D([Create Custom Assets]) E([Monitor node using Terminal UI]) F([Sign/verify any data using crypto keys]) N(Node) O(Ogmios) P(gRest/Koios) Q(DBSync) R(Wallet) S(CNTools) T(Tx Submit API) U(GraphQL) V(OfflineMetadataTools) X(gLiveView) Y(cardano-signer) Z[(PostgreSQL)] N --x C --x S N --x D --x S & V N --x E --x X N --x B B --x U --x Q B --x P --x Q P --x O P --x T F ---x Y N --x A --x R Q --x Z

Important
We strongly prefer the use of gRest over GraphQL components due to performance, security, simplicity, control and - most importantly - consistency benefits. Please refer to the official documentation if you're interested in GraphQL
or Cardano-Rest
components instead.
Note
The instructions are intentionally limited to stack/cabal to avoid wait times/availability of nix/docker files on a rapidly developing codebase - this also helps us avoid managing multiple versions of instructions.
"},{"location":"build/#description-for-components-built-by-community","title":"Description for components built by community","text":""},{"location":"build/#cntools","title":"CNTools","text":"A swiss army knife for pool operators, primarily built by Ola, to simplify typical operations regarding their wallet keys and pool management. You can read more about it here
"},{"location":"build/#gliveview","title":"gLiveView","text":"A local node monitoring tool, primarily built by Ola, to use in addition to remote monitoring tools like Prometheus/Grafana, Zabbix or IOG's RTView. This is especially useful when moving to a systemd deployment - if you haven't done so already - as it offers an intuitive UI to monitor the node status. You can read more about it here
"},{"location":"build/#topology-updater","title":"Topology Updater","text":"A temporary node-to-node discovery solution, run by Markus, that was started initially to bridge the gap created while awaiting completion of P2P on cardano network, but has since become an important lifeline to the network health - to allow everyone to activate their relay nodes without having to postpone and wait for manual topology completion requests. You can read more about it here
"},{"location":"build/#koiosgrest","title":"Koios/gRest","text":"A full-featured local query layer node to explore blockchain data (via dbsync) using standardised pre-built queries served via API as per standard from Koios - for which user can opt to participate in elastic query layer. You can read more about build steps here and reference API endpoints here
"},{"location":"build/#ogmios","title":"Ogmios","text":"A lightweight bridge interface for cardano-node. It offers a WebSockets API that enables local clients to speak Ouroboros' mini-protocols via JSON/RPC. You can read more about it here
"},{"location":"build/#cncli","title":"CNCLI","text":"A CLI tool written in Rust by Andrew Westberg for low-level communication with cardano-node. It is commonly used by SPOs to check their leader logs (integrates with CNTools as well as gLiveView) or to send their pool's health information to https://pooltool.io. You can read more about it here
"},{"location":"build/#cardano-signer","title":"Cardano Signer","text":"A tool written by Martin to sign/verify data (hex, text or binary) using cryptographic keys to generate data as per CIP-8 or CIP-36 standards. You can read more about it here
"},{"location":"contributors/","title":"Contributors","text":"Everyone is welcome to contribute to the guide, as well as the repository. Below is just a thank you to people who have been contributing consistently:
Adam Chris Damjan Homer Markus OCG Ola Ahlman Pal Dorogi Papacarp PegasusPool Psychomb RdLrT RedOracle SmaugPool
To start contributing, simply hit the github repository and raise Issue/Pull Request
"},{"location":"grest-meets/","title":"GRest Meeting summaries","text":"Thank you all for joining and contributing to the project
Below you can find a short summary of every GRest meeting held, both for logging purposes and for those who were not able to attend.
"},{"location":"grest-meets/#participants","title":"Participants:","text":"Participant 16Sep2021 02Sep2021 26Aug2021 19Aug2021 12Aug2021 29Jul2021 22Jul2021 15Jul2021 09Jul2021 02Jul2021 25Jun2021 Damjan Homer Markus Ola RdLrT Red Papacarp Paddy GimbaLabs 16Sep2021 02Sep2021 26Aug2021 19Aug2021 12Aug2021 29Jul2021 22Jul2021 15Jul2021 09Jul2021 02Jul2021After the initial stand-up updates from participants, we went through the entire Trello board, updating/deleting existing tickets and creating some new ones.
25Jun2021"},{"location":"grest-meets/#scheduling-running-update-queries","title":"Scheduling running update queries","text":"Solution being tested:
Pool cache table:
we will run the full query on regular intervals, ready for review for first iteration, will see about delta post tx cache query
transaction history:
need to think about how to approach inputs/outputs in the cached table (1 row per transaction with json objects for inputs/outputs or multiple rows for tx hash)
address_txs:
this endpoint should bring back list of txs, and have provision to use after and before block hash - lightweight against public schema
pool cache table:
create a trigger every 2 minutes (or similar) to run stake_distribution query
docker:
EXPLAIN (ANALYZE, BUFFERS)
Team
grest
schema)Individual
84226d33eed66be8e61d50b7e1dacebdc095cee9
on release/10.1.x
<query>.json
and sql in <query>.sql
), also remove get_
prefixnbthreads
in config, tune maxconn, switch to http mode)Ola added automatic deployment of services to the scripts last week. We added new tasks on Trello ticket, including flags for multiple networks (guild, testnet, mainnet), haproxy service dynamically creating hosts and doc updates. Overall, the script works well with some manual interaction still required at the moment.
"},{"location":"grest-meets/#supported-networks","title":"Supported Networks","text":"Just for the record here, a 16GB (or even 8GB) instance is enough to support both testnet and guild networks.
"},{"location":"grest-meets/#db-sync-versioning","title":"db-sync versioning","text":"We agreed to use the release/10.1.x
branch which is not yet released but built to include Alonzo migrations to avoid rework later. This version does require Alonzo config and hash to be in the node's config.json
. This has to be done manually and the files are available here. Once fully released, all members should rebuild the released version to ensure each instance is running the same code.
For the DNS setup ticket, we started to think about the instance names for the 2 DNS instances (orange in the graph). Submissions for names will be made in the Telegram group, and will probably make a poll once we have the entries finalised.
"},{"location":"grest-meets/#monitoring-system","title":"Monitoring System","text":"Priyank started setting up the monitoring on his instance which can then easily be switched to a separate monitoring instance. We agreed to use Prometheus / Grafana combo for data source / visualisation. We'll probably need to create some custom archiving of data to keep it long term as Prometheus stores only the last 30 days of data.
"},{"location":"grest-meets/#next-meeting","title":"Next meeting","text":"We would like to make Friday @ 07:00 UTC the standard time and keep meetings at weekly frequency. A poll will still be created for next weeks, but if there are no objections / requests for switching the time around (which we have not had so far) we can go ahead with the making Friday the standard with polls no longer required and only reminders / Google invites sent every week.
"},{"location":"grest-meets/#deployment-scripts_1","title":"Deployment scripts","text":"During the last week, work has been done on deployment scripts for all services (db-sync, gRest and haproxy) -> this is now in testing with updated instructions on trello. Everybody can put their name down on the ticket to signify when the setup is complete and note down any comments for bugs/improvements. This is the main priority at the moment as it would allow us to start transferring our setups to mainnet.
"},{"location":"grest-meets/#switch-to-mainnet","title":"Switch to Mainnet","text":"Following on from that, we created a ticket for starting to set up mainnet instances -> we can use 32GB RAM to start and increase later. While making sure everything works against the guild network is priority, people are free to start on this as well as we anticipate we are almost ready for the switch.
"},{"location":"grest-meets/#supported-networks_1","title":"Supported Networks","text":"This brings me to another discussion point which is on which networks are to be supported. After some discussion, it was agreed to keep beefy servers for mainnet, and have small independent instances for testnet maintained by those interested, while guild instance is pretty lightweight and useful to keep.
"},{"location":"grest-meets/#monitoring-system_1","title":"Monitoring System","text":"The ticket for creating a centralised monitoring system was discussed and updated. I would say it would be good to have at least a basic version of the system in place around the time we switch to mainnet. The system could eventually serve for: - analysis of instance - performances and subsequent tuning - endpoints usage - anticipation of system requirements increases - etc.
I would say that this should be an important topic of the next meeting to come up with an approach on how we will structure this system so that we can start building it in time for mainnet switch.
"},{"location":"grest-meets/#handling-ssl","title":"Handling SSL","text":"Enabling SSL was agreed to not be required by each instance, but is optional and documentation should be created for how to automate the process of renewing SSL certificates for those wishing to add it to their instance. The end user facing endpoints \"Instance Checker\" will of course be SSL-enabled.
"},{"location":"grest-meets/#next-meeting_1","title":"Next meeting","text":"We somewhat agreed to another meeting next week again at the same time, but some participants aren't 100% for availability. Friday at 07:00 UTC might be a good standard time we hold on to, but I will make a poll like last time so that we can get more info before confirming the meeting.
"},{"location":"grest-meets/#meeting-structure","title":"Meeting Structure","text":"As this was the first meeting, at the start we discussed about the meeting structure. In general, we agreed to something like listed below, but this can definitely change in the future:
1) 2-liner (60s) round the table stand-ups by everyone to sync up on what they were doing / are planning to do / mention struggles etc. This itself often sparks discussions. 2) going through the Trello board tasks with the intention of discussing and possbily assigning them to individuals / smaller groups (maybe 1-2-3 people choose to work together on a single task)
"},{"location":"grest-meets/#stand-ups","title":"Stand-ups","text":"We then proceeded to give a status of where we are individually in terms of what's been done, a summary below:
prereqs.sh
addendum can be done once artifacts are finalised (added a Trello ticket for tracking).All in all, I think we saw that there is need for these meetings as there are a lot of things to discuss and new ideas come up (like the monitoring system). We went for over an hour (~1h15min) and still didn't have enough time to go through the board, we basically only touched the DNS/haproxy part of the board. This tells me that we are in a stage where more frequent meetings are required, weekly instead of biweekly, as we are in the initial stage and it's important to build things right from the start rather than having to refactor later on. With that, the participants in general agreed to another meeting next week, but this will be confirmed in the TG chat and the times can be discussed then.
"},{"location":"sidebar/","title":"Tree","text":"The scripts on guild-operators repository have gone through quite a few changes to accomodate for the below:
prereqs.sh
with guild-deploy.sh
using minimalistic approach (i.e. anything you need to deploy is now required to be specified using command-line arguments). The old prereqs.sh
is left as-is but will no longer be maintained.prereqs.sh -t pvnode
would have created folder structure as /opt/cardano/pvnode
and replaced CNODE_HOME
references within scripts with PVNODE_HOME
. This will no longer be required. The deriving of top level folder will be done relative to scripts folder. Thus, parent of the folder containing env
file will automatically be treated as the top-level folder, and will no longer depend on an external environment variable. One may still use them for their own comfort when switching directories.
references.\"${HOME}\"/.local/bin
. Previously, we could have had binaries deployed to various locations (\"${HOME}\"/.cabal/bin
for node/CLI binaries, \"${HOME}\"/.cargo/bin
for cncli binary, \"${HOME}\"/bin
for downloaded binaries). This occured because of different compilers used different default locations for their output binariess (cargo for rust, cabal for Haskell, etc). The guild-deploy.sh/cabal-build-all.sh scripts will now provision the binaries to be made available to \"${HOME}\"/.local/bin instead. Ofcourse, as before, you can still customise the location of binaries using variables (eg: CCLI
, CNCLI
, CNODE_HOME
) in env
file.guild-deploy.sh
, giving users both the options.Some of the above required us to add breaking changes to some scripts, but hopefully the above explains the premise for those changes. To ease this one-time upgrade process for existing deployments, we have tried to come up with the guide below, feel free to edit this file to improve the documents based on your experience. Again, apologies in advance to those who do not agree with the above changes (the old code would ofcourse remain unimpacted at tag legacy-scripts
, so if you'd like to stick to old scripts , you can use -b legacy-scripts
for your tools to switch back).
Warning
Make sure you go through upgrade steps for your setup in a non-mainnet environment first!
guild-deploy.sh
(checkout new syntax with guild-deploy.sh -h
) to update all the scripts and files from the guild template. The scripts modified with user content (env
, gLiveView.sh
, topologyUpdater.sh
, cnode.sh
, etc) will be backed up before overwriting. The backed up files will be in the same folder as the original files, and will be named as ${filename}_bkp<timestamp>
. More static files (genesis files or some of the scripts themselves) will not be backed up, as they're not expected to be modified.Remember
Please add any environment-specific parameters (eg: custom top level folder, network flag, etc) to the execution command below, similar to prereqs.sh (check new syntax using guild-deploy.sh -h
)
mkdir \"$HOME/tmp\";cd \"$HOME/tmp\"\ncurl -sS -o guild-deploy.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/guild-deploy.sh\nchmod 700 guild-deploy.sh\n./guild-deploy.sh -s f -b master\n
\"${HOME}\"/.local/bin
is now part of your $PATH environment variable.source \"${HOME}\"/.bashrc\necho \"${PATH}\"\n
Check and add back your customisations to config files (or simply restore from automatically created backup of your config/topology files).
Since one of the basic changes we start to recommend as part of this revamp is moving your binaries to \"${HOME}\"/.local/bin
, you would want to move the binaries below from current location:
cabal build all
script (eg: cardano-node
, cardano-cli
, bech32
, cardano-address
, cardano-submit-api
, cardano-db-sync
cardano install
(eg: cncli
)prereqs.sh
(eg: cardano-hw-cli
)You can move the binaries by using mv command (for example, if you dont have any other files in these folders, you can use the command below:
Note
Ideally, you should shutdown services (eg: cnode, cnode-dbsync, etc) prior to running the below to ensure they run from new location (you can also re-deploy them if you haven't done so in a while, eg: ./cnode.sh -d
). At the end of the guide, you can start them back up.
mv -t \"${HOME}\"/.local/bin/ \"${HOME}\"/.cabal/bin/* \"${HOME}\"/.cargo/bin/* \"${HOME}\"/bin/*\n
We've found users often confuse between $PATH variable resolution between multiple shell sessions, systemd, etc. To avoid this, edit the following files and uncomment and set the following variables to the appropriate paths as per your deployment (eg: CCLI=\"${HOME}\"/.local/bin/cardano-cli
if following above):
The above should take care of tools and services. However, you might still have duplicate binaries in your $PATH (previous artifacts, re-build using old scripts, etc) - it is best that you remove any old binary files from alternate folders. You can do so by executing the below:
whereis bech32 cardano-address cardano-cli cardano-db-sync cardano-hw-cli cardano-node cardano-submit-api cncli ogmios\n
The above might result in some lines having more than one entry (eg: you might have cardano-cli
in \"${HOME}\"/.cabal/bin
and \"${HOME}\"/.local/bin
) - for which you'd want to delete the reference(s) not in \"${HOME}\"/.local/bin
, while for other cases - you might have no values (eg: you may not use cardano-db-sync
, cncli
, ogmios
and/or cardano-hw-cli
. You need not take any actions for the binaries you do not use.
Hope the guide above helps you with the migration, but again - we could've missed some edge cases. If so, please report via chat in Koios Discussions channel only. Please DO NOT make edits to the script content based on forum/alternate guide/channels, while done with best intentions - there have been solutions put online that modify files unnecessarily instead of correcting configs and disabling updates, such actions will only cause trouble for future updates.
"},{"location":"Appendix/RecoverByronWallet/","title":"Unofficial Instructions for recovering your Byron Era funds on the new Incentivized Shelley Testnet","text":""},{"location":"Appendix/RecoverByronWallet/#1-grab-and-install-haskell","title":"1. Grab and install Haskell","text":"curl -sSL https://get.haskellstack.org/ | sh\n
"},{"location":"Appendix/RecoverByronWallet/#2-get-the-wallet","title":"2. Get the wallet","text":"note: you must build from source as of today as there are changes that just got into master you need
git clone https://github.com/input-output-hk/cardano-wallet.git\n
"},{"location":"Appendix/RecoverByronWallet/#3-go-into-the-wallet-directory","title":"3. Go into the wallet directory","text":"cd cardano-wallet\n
"},{"location":"Appendix/RecoverByronWallet/#4-build-the-wallet","title":"4. Build the wallet","text":"stack build --test --no-run-tests\n
If it fails there are a few reasons we have found: - The cardano build instructions reference a few things that may be missing. Check those. - or maybe one of these would help:"},{"location":"Appendix/RecoverByronWallet/#libssl","title":"Libssl:","text":"sudo apt install libssl-dev\n
"},{"location":"Appendix/RecoverByronWallet/#sqlite","title":"Sqlite :","text":"sudo apt-get install sqlite3 libsqlite3-dev \n
"},{"location":"Appendix/RecoverByronWallet/#gmp","title":"gmp:","text":"sudo apt-get install libgmp3-dev \n
"},{"location":"Appendix/RecoverByronWallet/#systemd-dev","title":"systemd dev:","text":"sudo apt install libsystemd-dev\n
get coffee... It takes awhile
"},{"location":"Appendix/RecoverByronWallet/#5-when-its-done-install-executables-to-your-path","title":"5. When its done, install executables to your path","text":"stack install\n
"},{"location":"Appendix/RecoverByronWallet/#6-test-to-make-sure-cardano-wallet-jormungandr-works-fine","title":"6. Test to make sure cardano-wallet-jormungandr works fine.","text":"Generate your new mnemonics you will need below. Note that this generates 15 words as opposed to your byron era mnemnomics which were only 12 words.
cardano-wallet-jormungandr mnemonic generate\n
"},{"location":"Appendix/RecoverByronWallet/#7-launch-the-wallet-as-a-service","title":"7. Launch the wallet as a service.","text":"you can either open another terminal window or use screen or something. anyway, wherever you run this next command you won't be able to use anymore for a terminal until you stop the wallet
change --node-port 3001 to wherever you have your jormungandr rest interface running. for me it was 5001.. so
change --port 3002 to wherever you want to access the wallet interface at. If you have other things running avoid those ports. for most, 3002 should be free
just to future proof these instructions. genesis should be whatever genesis you are on.
cardano-wallet-jormungandr serve --node-port 3001 --port 3002 --genesis-block-hash e03547a7effaf05021b40dd762d5c4cf944b991144f1ad507ef792ae54603197\n
"},{"location":"Appendix/RecoverByronWallet/#8-restore-your-byron-wallet","title":"8. Restore your byron wallet:","text":"--->in another window
replace foo, foo, foo with all your mnemnomics from the byron wallet you are restoring
Also, if you put your wallet on a different port than 3002, fix that too
curl -X POST -H \"Content-Type: application/json\" -d '{ \"name\": \"legacy_wallet\", \"mnemonic_sentence\": [\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\"], \"passphrase\": \"areallylongpassword\"}' http://localhost:3002/v2/byron-wallets\n
Thats going to spit out some information about a wallet it creates, you should see the value of your wallet - hopefully its not zero. And you need the wallet ID for the next step"},{"location":"Appendix/RecoverByronWallet/#9-create-your-shelley-wallet","title":"9. Create your shelley wallet:","text":"Remember all those mnemnomics you made above.. put them here instead of all the foo's.
curl -X POST -H \"Content-Type: application/json\" -d '{ \"name\": \"pool_wallet\", \"mnemonic_sentence\": [\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\"], \"passphrase\": \"areallylongpasswordagain\"}' http://localhost:3002/v2/wallets\n
Important thing to get is the wallet id from this command"},{"location":"Appendix/RecoverByronWallet/#10-migrate-your-funds","title":"10. Migrate your funds","text":"Now you are ready to migrate your wallet. replace the <old wallet id>
and <new wallet id>
with the values you got above
curl -X POST -H \"Content-Type: application/json\" -d '{\"passphrase\": \"areallylongpassword\"}' http://localhost:3002/v2/byron-wallets/<old wallet id>/migrations/<new wallet id>\n
"},{"location":"Appendix/RecoverByronWallet/#11-congratulations-your-funds-are-now-in-your-new-wallet","title":"11. Congratulations. your funds are now in your new wallet.","text":"From here we recommend you send them to a new address entirely owned and created by jcli or whatever method you have been using for the testnet process.
This technically may not be required. But a lot of us did it and we know it works for setting up pools and stuff.
send a small amount first just to make sure you are in control of the transaction and don't send your funds to la la land.
If you want to send to another address use the command below, but replace the address that you want to send it to, the amount, and your <new wallet id>
curl -X POST -H \"Content-Type: application/json\" -d '{\"payments\": [ { \"address\": \"<address to send to>\"\", \"amount\": { \"quantity\": 83333330000000, \"unit\": \"lovelace\" } } ], \"passphrase\": \"areallylongpasswordagain\"}' http://localhost:3002/v2/wallets/<new wallet id>/transactions\n
"},{"location":"Appendix/monitoring/","title":"Monitoring","text":"Ensure the Pre-Requisites are in place before you proceed.
This is an easy-to-use script to automate setting up of monitoring tools. Tasks automates the following tasks: - Installs Prometheus, Node Exporter and Grafana Servers for your respective Linux architecture. - Configure Prometheus to connect to cardano node and node exporter jobs. - Provisions the installed prometheus server to be automatically available as data source in Grafana. - Provisions two of the common grafana dashboards used to monitor cardano-node
by SkyLight and IOHK to be readily consumed from Grafana. - Deploy prometheus
,node_exporter
and grafana-server
as systemd service on Linux. - Start and enable those services.
Note that securing prometheus/grafana servers via TLS encryption and other security best practices are out of scope for this document, and its mainly aimed to help you get started with monitoring without much fuss.
!> Ensure that you've opened the firewall port for grafana server (default used in this script is 5000)
"},{"location":"Appendix/monitoring/#download-setup_monsh","title":"Download setup_mon.sh","text":"If you have run guild-deploy.sh
, you can skip this step. To download monitoring script, you can execute the commands below:
cd $CNODE_HOME/scripts\nwget https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/setup_mon.sh\nchmod 750 setup_mon.sh\n
"},{"location":"Appendix/monitoring/#customise-any-environment-variables","title":"Customise any Environment Variables","text":"The default selection may not always be usable for everyone. You can customise further environment variable settings by opening in editor (eg: vi setup_mon.sh
), and updating variables below to your liking:
#!/usr/bin/env bash\n# shellcheck disable=SC2209,SC2164\n\n######################################################################\n#### Environment Variables\n######################################################################\nCNODE_IP=127.0.0.1\nCNODE_PORT=12798\nGRAFANA_HOST=0.0.0.0\nGRAFANA_PORT=5000\nPROJ_PATH=/opt/cardano/monitoring\nPROM_HOST=127.0.0.1\nPROM_PORT=9090\nNEXP_PORT=$(( PROM_PORT + 1 ))\n````\n\n#### Set up Monitoring\n\nExecute setup_mon.sh with full path to destination folder you want to setup monitoring in. If you're following guild folder structure, you do not need to specify `-d`. Read the usage comments below before you run the actual script.\n\nNote that to deploy services as systemd, the script expect sudo access is available to the user running the script.\n\n``` bash\ncd $CNODE_HOME/scripts\n# To check Usage parameters:\n# ./setup_mon.sh -h\n#Usage: setup_mon.sh [-d directory] [-h hostname] [-p port]\n#Setup monitoring using Prometheus and Grafana for Cardano Node\n#-d directory Directory where you'd like to deploy the packages for prometheus , node exporter and grafana\n#-i IP/hostname IPv4 address or a FQDN/DNS name where your cardano-node (relay) is running (check for hasPrometheus in config.json; eg: 127.0.0.1 if same machine as cardano-node)\n#-p port Port at which your cardano-node is exporting stats (check for hasPrometheus in config.json; eg: 12798)\n./setup_mon.sh\n# \n# Downloading prometheus v2.18.1...\n# Downloading grafana v7.0.0...\n# Downloading exporter v0.18.1...\n# Downloading grafana dashboard(s)...\n# - SKYLight Monitoring Dashboard\n# - IOHK Monitoring Dashboard\n# \n# NOTE: Could not create directory as rdlrt, attempting sudo ..\n# NOTE: No worries, sudo worked !! Moving on ..\n# Configuring components\n# Registering Prometheus as datasource in Grafana..\n# Creating service files as root..\n# \n# =====================================================\n# Installation is completed\n# =====================================================\n# \n# - Prometheus (default): http://127.0.0.1:9090/metrics\n# Node metrics: http://127.0.0.1:12798\n# Node exp metrics: http://127.0.0.1:9091\n# - Grafana (default): http://0.0.0.0:5000\n# \n# \n# You need to do the following to configure grafana:\n# 0. The services should already be started, verify if you can login to grafana, and prometheus. If using 127.0.0.1 as IP, you can check via curl\n# 1. Login to grafana as admin/admin (http://0.0.0.0:5000)\n# 2. Add \"prometheus\" (all lowercase) datasource (http://127.0.0.1:9090)\n# 3. Create a new dashboard by importing dashboards (left plus sign).\n# - Sometimes, the individual panel's \"prometheus\" datasource needs to be refreshed.\n# \n# Enjoy...\n# \n# Cleaning up...\n
"},{"location":"Appendix/monitoring/#view-dashboards","title":"View Dashboards","text":"You should now be able to Login to grafana dashboard, using the public IP of your server, at port 5000. The initial credentials to login would be admin/admin, and you will be asked to update your password upon first login. Once logged on, you should be able to go to Manage > Dashboards
and select the dashboard you'd like to view. Note that if you've just started the server, you might see graphs as empty, as initial interval for dashboards is 12 hours. You can change it to 5 minutes by looking at top right section of the page.
Thanks to Pal Dorogi for the original setup instructions used for modifying.
"},{"location":"Appendix/postgres/","title":"Sample Postgres Setup","text":"These deployment instructions used for reference while building cardano-db-sync tool, with the scope being ease of set up, and some tuning baselines for those who are new to Postgres DB. It is recommended to customise these as per your needs for Production builds.
Important
You'd find it pretty useful to set up ZFS on your system prior to setting up Postgres, to help with your IOPs throughput requirements. You can find sample install instructions here. You can set up your entire root mount to be on ZFS, or you can opt to mount a file as ZFS on \"${CNODE_HOME}\"
"},{"location":"Appendix/postgres/#install-postgresql-server","title":"Install PostgreSQL Server","text":"Execute commands below to set up Postgres Server
# Determine OS platform\nOS_ID=$( (grep -i ^ID_LIKE= /etc/os-release || grep -i ^ID= /etc/os-release) | cut -d= -f 2)\nDISTRO=$(grep -i ^NAME= /etc/os-release | cut -d= -f 2)\n\nif [ -z \"${OS_ID##*debian*}\" ]; then\n#Debian/Ubuntu\nwget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -\n RELEASE=$(lsb_release -cs)\necho \"deb [arch=amd64] http://apt.postgresql.org/pub/repos/apt/ ${RELEASE}\"-pgdg main | sudo tee /etc/apt/sources.list.d/pgdg.list\n sudo apt-get update\n sudo apt-get -y install postgresql-15 postgresql-server-dev-15 postgresql-contrib libghc-hdbc-postgresql-dev\n sudo systemctl restart postgresql\n sudo systemctl enable postgresql\nelse\necho \"We have no automated procedures for this ${DISTRO} system\"\nfi\n
"},{"location":"Appendix/postgres/#create-user-in-postgres","title":"Create User in Postgres","text":"Login to Postgres instance as superuser:
echo $(whoami)\n# <user>\nsudo su postgres\npsql\n
Note the returned as the output of echo $(whoami)
command. Replace all instance of in the documentation below. Execute the below in psql prompt. Replace and PasswordYouWant with your OS user (output of echo $(whoami)
command executed above) and a password you'd like to authenticate to Postgres with:
CREATE ROLE <user> SUPERUSER LOGIN;\nALTER USER <user> PASSWORD 'PasswordYouWant';\n\\q\n
Type exit
at shell to return to your user from postgres"},{"location":"Appendix/postgres/#verify-login-to-postgres-instance","title":"Verify Login to postgres instance","text":"export PGPASSFILE=$CNODE_HOME/priv/.pgpass\necho \"/var/run/postgresql:5432:cexplorer:*:*\" > $PGPASSFILE\nchmod 0600 $PGPASSFILE\npsql postgres\n# psql (15.0)\n# Type \"help\" for help.\n# \n# postgres=#\n
"},{"location":"Appendix/postgres/#tuning-your-instance","title":"Tuning your instance","text":"Before you start populating your DB instance using dbsync data, now might be a good time to put some thought on to baseline configuration of your postgres instance by editing /etc/postgresql/15/main/postgresql.conf
. Typically, you might find a lot of common standard practices parameters available in tuning guides. For our consideration, it would be nice to start with some baselines - for which we will use inputs from example here, which would need to be customised further to your environment and resources.
In a typical Koios [gRest] setup, we use below for minimum viable specs (i.e. 64GB RAM, > 8 CPUs, >16K IOPs for ioping -q -S512M -L -c 10 -s8k .
output when postgres data directory is on ZFS configured with max arc of 4GB), we find the below configuration to be the best common setup:
In addition to above, due to the nature of usage by dbsync (synching from node and restart traversing back to last saved ledger-state snapshot), we leverage data retention on blockchain - as we're not affected by loss of volatile information upon a restart of instance. Thus, we can relax some of the data retention and protection against corruption related settings, as those are IOPs/CPU Load Average impacts that the instance does not need to spend. We'd recommend setting 3 of those below in your /etc/postgresql/15/main/postgresql.conf
:
Once your changes are done, ensure to restart postgres service using sudo systemctl restart postgresql
.
Important
An average pool operator may not require cardano-db-sync at all. Please verify if it is required for your use as mentioned here.
PGPASSFILE
environment variable is set as per the instructions in the sample guide, for db-sync
to be able to connect.Execute the below to clone the cardano-db-sync
repository to $HOME/git
folder on your system:
cd ~/git\ngit clone https://github.com/input-output-hk/cardano-db-sync\ncd cardano-db-sync\n
"},{"location":"Build/dbsync/#build-cardano-db-sync","title":"Build Cardano DB Sync","text":"You can use the instructions below to build the latest release of cardano-db-sync
.
git fetch --tags --all\ngit pull\n# Include the cardano-crypto-praos and libsodium components for db-sync\n# On CentOS 7 (GCC 4.8.5) we should also do\n# echo -e \"package cryptonite\\n flags: -use_target_attributes\" >> cabal.project.local\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-db-sync/releases/latest | jq -r .tag_name)\n# Use `-l` argument if you'd like to use system libsodium instead of IOG fork of libsodium while compiling\n$CNODE_HOME/scripts/cabal-build-all.sh\n
The above would copy the cardano-db-sync
binary into ~/.local/bin
folder."},{"location":"Build/dbsync/#prepare-db-for-sync","title":"Prepare DB for sync","text":"Now that binaries are available, let's create our database (when going through breaking changes, you may need to use --recreatedb
instead of --createdb
used for the first time. Again, we expect that PGPASSFILE
environment variable is already set (refer to the top of this guide for sample instructions):
cd ~/git/cardano-db-sync\n# scripts/postgresql-setup.sh --dropdb #if exists already, will fail if it doesnt - thats OK\nscripts/postgresql-setup.sh --createdb\n# Password:\n# Password:\n# All good!\n
Verify you can see \"All good!\" as above!
"},{"location":"Build/dbsync/#create-symlink-to-schema-folder","title":"Create Symlink to schema folder","text":"DBSync instance requires the schema files from the git repository to be present and available to the dbsync instance. You can either clone the ~/git/cardano-db-sync/schema
folder OR create a symlink to the folder and make it available to the startup command we will be using. We will use the latter in sample below:
ln -s ~/git/cardano-db-sync/schema $CNODE_HOME/guild-db/schema\n
"},{"location":"Build/dbsync/#restore-using-snapshot","title":"Restore using Snapshot","text":"If you're running a mainnet/preview/preprod instance of dbsync, you might want to consider use of dbsync snapshots as documented here. The snapshot files as of recent epoch are available via links in release notes.
At high-level, this would involve steps as below (read and update paths as per your environment):
# Replace the actual link below with the latest one from release notes\nwget https://update-cardano-mainnet.iohk.io/cardano-db-sync/13/db-sync-snapshot-schema-13-block-7622755-x86_64.tgz\nrm -rf ${CNODE_HOME}/guild-db/ledger-state ; mkdir -p ${CNODE_HOME}/guild-db/ledger-state\ncd -; cd ~/git/cardano-db-sync\nscripts/postgresql-setup.sh --restore-snapshot /tmp/dbsyncsnap.tgz ${CNODE_HOME}/guild-db/ledger-state\n# The restore may take a while, please be patient and do not interrupt the restore process. Once restore is successful, you may delete the downloaded snapshot as below:\n# rm -f /tmp/dbsyncsnap.tgz\n
"},{"location":"Build/dbsync/#test-running-dbsync-manually-at-terminal","title":"Test running dbsync manually at terminal","text":"In order to verify that you can run dbsync, before making a start - you'd want to ensure that you can run it interactively once. To do so, try the commands below:
cd $CNODE_HOME/scripts\nexport PGPASSFILE=$CNODE_HOME/priv/.pgpass\n./dbsync.sh\n
You can monitor logs if needed via parallel session using tail -10f $CNODE_HOME/logs/dbsync.json
. If there are no error, you would want to press Ctrl-C to stop the dbsync.sh execution and deploy it as a systemd service. To do so, use the commands below (the creation of file is done using sudo
permissions, but you can always deploy it manually):
cd $CNODE_HOME/scripts\n./dbsync.sh -d\n# Deploying cnode-dbsync.service as systemd service..\n# cnode-dbsync.service deployed successfully!!\n
Now to start dbsync instance, you can run sudo systemctl start cnode-dbsync
Note
Note that dbsync while syncs, it might defer creation of indexes/constraints to speed up initial catch up. Once relatively closer to tip, this will initiate creation of indexes - which can take a while in background. Thus, you might notice the query timings right after reaching to tip might not be as good.
"},{"location":"Build/dbsync/#update-dbsync","title":"Update DBSync","text":"Updating dbsync can have different tasks depending on the versions involved. We attempt to briefly explain the tasks involved:
sudo systemctl stop cnode-dbsync
)Go to your git folder, pull and checkout to latest version as in example below (if you were to switch to 13.1.1.3
):
cd ~/git/cardano-db-sync\ngit pull\ngit checkout 13.1.1.3\n
If going through major version update (eg: 13.x.x.x to 14.x.x.x), you might need to rebuild and resync db from scratch, you may still follow the section to restore using snapshot to save some time (as long as you use a compatible snapshot).
cardano-node
version has changed (specifically if it's ledger-state
schema is different), you'd also need to clear the ledger-state directory (eg: rm -rf $CNODE_HOME/guild-db/ledger-state
)dbsync.sh
starts up fine manually as described above. If it does, stop it and go ahead with startup of systemd service (i.e. sudo systemctl start cnode-dbsync
)To validate, connect to your postgres
instance and execute commands as per below:
export PGPASSFILE=$CNODE_HOME/priv/.pgpass\npsql cexplorer\n
You should be at the psql
prompt, you can check the tables and verify they're populated:
\\dt\nselect * from meta;\n
A sample output of the above two commands may look like below (the number of tables and names may vary between versions):
cexplorer=# \\dt\nList of relations\n Schema | Name | Type | Owner\n--------+---------------------------+-------+-------\n public | ada_pots | table | centos\n public | admin_user | table | centos\n public | block | table | centos\n public | delegation | table | centos\n public | delisted_pool | table | centos\n public | epoch | table | centos\n public | epoch_param | table | centos\n public | epoch_stake | table | centos\n public | ma_tx_mint | table | centos\n public | ma_tx_out | table | centos\n public | meta | table | centos\n public | orphaned_reward | table | centos\n public | param_proposal | table | centos\n public | pool_hash | table | centos\n public | pool_meta_data | table | centos\n public | pool_metadata | table | centos\n public | pool_metadata_fetch_error | table | centos\n public | pool_metadata_ref | table | centos\n public | pool_owner | table | centos\n public | pool_relay | table | centos\n public | pool_retire | table | centos\n public | pool_update | table | centos\n public | pot_transfer | table | centos\n public | reserve | table | centos\n public | reserved_ticker | table | centos\n public | reward | table | centos\n public | schema_version | table | centos\n public | slot_leader | table | centos\n public | stake_address | table | centos\n public | stake_deregistration | table | centos\n public | stake_registration | table | centos\n public | treasury | table | centos\n public | tx | table | centos\n public | tx_in | table | centos\n public | tx_metadata | table | centos\n public | tx_out | table | centos\n public | withdrawal | table | centos\n(37 rows)\n\n\n\nselect * from meta;\n id | start_time | network_name\n----+---------------------+--------------\n 1 | 2017-09-23 21:44:51 | mainnet\n(1 row)\n
"},{"location":"Build/graphql/","title":"Graphql","text":"!> We have stopped maintaining documentation for Cardano-GraphQL, and prefer use of PostgREST instead. The specific component does not follow the process/technology/language (requires npm, yarn) used by other components (cabal/stack), and the value provided by cardano-graphql
over the (haskell-based) hasura instance has been negligible. Also, an average pool operator may not require cardano-graphql at all, please verify if it is required for your use as mentioned here. The instructions below are out of date
.
Ensure the Pre-Requisites are in place before you proceed.
"},{"location":"Build/graphql/#build-hasura-graphql-engine","title":"Build Hasura graphql-engine","text":"Going with the spirit of the documentation here, instruction to build the graphql-engine binary :)
cd ~/git\ngit clone https://github.com/hasura/graphql-engine\ncd graphql-engine/server\n$CNODE_HOME/scripts/cabal-build-all.sh\n
This should make graphql-engine
available at ~/.local/bin."},{"location":"Build/graphql/#build-cardano-graphql","title":"Build cardano-graphql","text":"The build will fail if you are running a version of node.js earlier than 10.0.0 (which could happen if you have a conflicting version in your $PATH). You can verify your node version by executing the below:
#check your version of node.js\nnode -v\n#if response is 10.0.0 or higher build can proceed. \n
The commands below will help you compile the cardano-graphql node:
cd ~/git\ngit clone https://github.com/input-output-hk/cardano-graphql\ncd cardano-graphql\ngit checkout v1.1.1\nyarn\n#yarn install v1.22.4\n# [1/4] Resolving packages...\n# [2/4] Fetching packages...\n# info fsevents@2.1.2: The platform \"linux\" is incompatible with this module.\n# info \"fsevents@2.1.2\" is an optional dependency and failed compatibility check. Excluding it from installation.\n# info fsevents@1.2.12: The platform \"linux\" is incompatible with this module.\n# info \"fsevents@1.2.12\" is an optional dependency and failed compatibility check. Excluding it from installation.\n# [3/4] Linking dependencies...\n# warning \" > graphql-type-datetime@0.2.4\" has incorrect peer dependency \"graphql@^0.13.2\".\n# warning \" > @typescript-eslint/eslint-plugin@1.13.0\" has incorrect peer dependency \"eslint@^5.0.0\".\n# warning \" > @typescript-eslint/parser@1.13.0\" has incorrect peer dependency \"eslint@^5.0.0\".\n# [4/4] Building fresh packages...\n# Done in 20.70s.\nyarn build\n# yarn run v1.22.4\n# $ yarn codegen:internal && yarn codegen:external && tsc -p . && shx cp src/schema.graphql dist/\n# $ graphql-codegen\n# \u2714 Parse configuration\n# \u2714 Generate outputs\n# $ graphql-codegen --config ./codegen.external.yml\n# \u2714 Parse configuration\n# \u2714 Generate outputs\n# Done in 38.11s.\ncd dist\nrsync -arvh ../node_modules ./\n
"},{"location":"Build/graphql/#set-up-environment-for-cardano-graphql","title":"Set up environment for cardano-graphql","text":"cardano-graphql requires cardano-node, cardano-db-sync-extended, postgresql and graphql-engine to be set up and running. The below will help you map the components:
export PGPASSFILE=$CNODE_HOME/priv/.pgpass\nIFS=':' read -r -a PGPASS <<< $(cat $PGPASSFILE)\nexport HASURA_GRAPHQL_ENABLE_TELEMETRY=false # Optional. To send usage data to Hasura, set to true.\nexport HASURA_GRAPHQL_DATABASE_URL=postgres://${PGPASS[3]}:${PGPASS[4]}@${PGPASS[0]}:${PGPASS[1]}/${PGPASS[2]}\nexport HASURA_GRAPHQL_ENABLE_CONSOLE=true\nexport HASURA_GRAPHQL_ENABLED_LOG_TYPES=\"startup, http-log, webhook-log, websocket-log, query-log\"\nexport HASURA_GRAPHQL_SERVER_PORT=4080\nexport HASURA_GRAPHQL_SERVER_HOST=0.0.0.0\nexport CACHE_ENABLED=true\nexport HASURA_URI=http://127.0.0.1:4080\ncd ~/git/cardano-graphql/dist\ngraphql-engine serve &\nnode index.js\n
"},{"location":"Build/grest-changelog/","title":"Koios gRest Changelog","text":""},{"location":"Build/grest-changelog/#110rc-for-all-networks","title":"[1.1.0rc] - For all networks.","text":"This will be first major [breaking] release for Koios consumers in a while, and will be rolled out under new base prefix (/api/v1
). The major work with this release was to start making use of newer flags in dbsync which help performance of queries under new endpoints. Also, you'd see quite a few new endpoint additions below, that'd be helping out with slightly lighter version of queries. To keep migration paths easier, we will ensure both v0 and v1 versions of the release is up for a month post release, before retiring v0.
/pool_registrations
- List of all pool registrations initiated in the requested epoch #239/pool_retirements
- List of all pool retirements initiated in the requested epoch #239/treasury_withdrawals
- List of withdrawals made from treasury #239/reserve_withdrawals
- List of withdrawals made from reserves (MIRs) #239/account_txs
- Transactions associated with a given stake address #239/address_utxos
- Get UTxO details for requested addresses #239/asset_utxos
- Get UTxO details for requested assets #239/script_utxos
- Get UTxO details for requested script hashes #239/utxo_info
- Details for requested UTxO arrays #239/script_info
- Information about a given script FROM script hashes #239/ogmios/
- Expose stateless ogmios endpoints #1690/account_utxos
, /credential_utxos
- Accept extended
as an additional flag - which enables asset_list
, reference_script
and inline_datum
to the output #239/block_txs
- Flatten output with transaction details (tx_hash
, epoch_no
, block_height
, block_time
) instead of tx_hashes
array #239/epoch_params
- Update cost_models
to JSON (upstream change in node) #239/account_assets
, /address_assets
- Flatten the output result (instead of asset_list
array) making it easier to apply horizontal filtering based on any of the fields/account_utxos
, /address_utxos
, /asset_utxos
, /script_utxos
and /utxo_info
to return same schema giving complete details about UTxOs involved, with few fields toggled based on extended
input flag #239/pool_list
- Add various details to the endpoint for each pool (pool_id_hex
,active_epoch_no
,margin
,fixed_cost
,pledge
,reward_addr
,owners
,relays
,ticker
,meta_url
,meta_hash
,pool_status
,retiring_epoch
) - this should mean some of the requests to pool_info
should no longer be required #239/pool_updates
- In v0, pool_updates
only provided pool registration updates, while pool_status
corresponded to current status of pool. With v1, we will have registration as well as deregistration transactions, and each transaction will have update_type
(enum of either registration
or deregistration
) instead of pool_status
. As a side-effect, since a registration transaction only has retiring_epoch
as metadata, all the other fields will show up as null
for such a transaction #239/pool_metadata
, /pool_relays
- Add pool_status
field to denote whether pool is retired #239/datum_info
- Rename hash
to datum_hash
and add creation_tx_hash
#239/native_script_list
- Remove script
column (as it has pretty large output better queried against script_info
), add size
and change type
to text #239/plutus_script_list
- Add type
and size
to output #239/asset_info
- Add cip68_metadata
JSONB field #239/pool_history
- Add member_rewards #225/tx_utxos
- No longer required as replaced by /utxo_info
#239v1
from v0
#1690epoch_info_cache
Remove protocol parameters, as they can be queried from live table. Accordingly update dependent queries #239consumed_by_tx_in_id
column in tx_out
from dbsync 13.1.1.3 across endpoints #239_last_active_stake_validated_epoch
in active_stake_cache #222The release is effectively the same as 1.0.10rc
except with one minor modification below.
cs.[{\"key\":\"value\"}]
in PostgREST #172This release primarily focuses on the ability to better support DeFi projects, along with some value addition for existing clients, by bringing in 10 new endpoints (paired with 2 deprecations), a few additional optional input parameters, and some additional output columns to existing endpoints. The only breaking change/fix is for the output returned for tx_info
.
Also, dbsync 13.1.x.x has been released and is recommended to be used for this release
"},{"location":"Build/grest-changelog/#new-endpoints-added_1","title":"New endpoints added","text":"/asset_addresses
- Equivalent of deprecated /asset_address_list
#149/asset_nft_address
- Returns address where the specified NFT sits on #149/account_utxos
- Returns brief details on non-empty UTxOs associated with a given stake address #149/asset_info_bulk
- Bulk version of /asset_info
#142/asset_token_registry
- Returns assets registered via token registry on github #145/credential_utxos
- Returns UTxOs associated with a payment credential #149/param_updates
- Returns list of parameter update proposals applied to the network #149/policy_asset_addresses
- Returns addresses with quantity for each asset on a given policy #149/policy_asset_info
- Equivalent of deprecated /asset_policy_info
but with more details in output #149/policy_asset_list
- Returns list of assets under the given policy (including supply) #142, #149/account_addresses
- Add optional _first_only
and _empty
flags to show only first address with tx or to include empty addresses to output #149/epoch_info
- Add optional _include_next_epoch
field to show next epoch stats if available (eg: nonce, active stake) #143/account_assets
, /address_assets
, /address_info
, /tx_info
, /tx_utxos
- Add decimals
to output #142/policy_asset_info
- Add minting_tx_hash
, total_supply
, mint_cnt
, burn_cnt
and creation_time
fields to the output #149/tx_info
- Change _invalid_before
and _invalid_after
to text field #141tx_info
- Remove the field plutus_contracts
> [array] > outputs
as there is no logic to connect it to inputs spending #163/asset_address_list
- Renamed to asset_addresses
keeping naming in line with other endpoints (old one still present, but will be deprecated in a future release) #149/asset_policy_info
- Renamed to policy_asset_info
keeping naming in line with other endpoints (old one still present, but will be deprecated in a future release) #149/epoch_info
, /epoch_params
- Restrict output to current epoch #149/block_info
- Use /previous_id
field to show previous/next blocks (previously was using block_id/height) #145/asset_info
/asset_policy_info
- Fix mint tx data to be latest #141grest.asset_info_cache
to hold mint/burn counts along with first/last mint tx/keys #142/pool_delegators
output column latest_delegation_tx_hash
#149authenticator
user, whose default statement_timeout
is set to 65s and update configs accordingly #1606This release is effectively the same as 1.0.9rc
below (please check out the notes accordingly), just with minor bug fix on setup-grest.sh
itself.
This release candidate is non-breaking for existing methods and inputs, but breaking for output objects for endpoints. The aim with the release candidate version is to allow folks a couple of weeks to test and adapt their libraries before applying to mainnet.
"},{"location":"Build/grest-changelog/#new-endpoints-added_2","title":"New endpoints added","text":"datum_info
- List of datum information for given datum hashesaccount_info_cached
- Same as account_info
, but serves cached information instead of live oneaddress_info
, address_assets
, account_assets
, tx_info
, asset_list
asset_summary
to align output asset_list
object to return array of policy_id
, asset_name
, fingerprint
(and quantity
, minting_txs
where applicable) #120asset_history
- Fix metadata to wrap in array to refer to right object #122asset_txs
- Add optional boolean parameter _history
(default: false
) to toggle between querying current UTxO set vs entire history for asset #122pool_history
- fixed_cost
, pool_fees
, deleg_rewards
, epoch_ros
will be returned as 0 when null #122tx_info
- plutus_contracts->outputs
can be null #122guild-operators
repository to koios-artifacts
repository. This is to ensure that the updates made to scripts and other tooling do not have a dependency on Koios query versioning #122block_info
- Use block_no
instead of id
to check for previous/next block hash #122This release contains minor bug fixes that were discovered in koios-1.0.7. No major changes to output for this one.
"},{"location":"Build/grest-changelog/#changes-for-api","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#new-endpoints-added_3","title":"New endpoints added","text":"tx_info
and tx_metadata
- Align metadata for JSON output format #1542blocks
- Query Output aligned to specs (epoch
=> epoch_no
)epoch_block_protocols
- [ ** Specs only ** ] Fix documentation schema, which was accidentally showing wrong outputpool_delegators_history
- List all epochs instead of current, if no _epoch_no
is specified #1545asset_info
- Fix metadata aggregation for minting transactions with multiple metadata keys #1543stake_distribution_new_accounts
- Leftover reference for account_info
which now accepts an array, resulted in an error when populating the stake distribution cache for new accounts #1541grest-poll.sh
- Remove query view section from polling script, and remove grestrpcs re-creation per hour (it's already updated when setup-grest.sh
is run), in preparation for #1545This release continues updates from koios-1.0.6 to further utilise stake-snapshot cache tables which would be useful for SPOs as well as reduce downtime post epoch transition. A much-requested feature to accept bulk inputs for many block/address/account endpoints is now complete. Additionally, koios instance providers are now recommended to use cardano-node 1.35.3 with dbsync 13.0.5.
"},{"location":"Build/grest-changelog/#changes-for-api_1","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#new-endpoints-added_4","title":"New endpoints added","text":"pool_delegators_history
- Provides historical record for pool's delegators #1486pool_stake_snapshot
- Provides mark, set and go snapshot values for pool being queried. #1489pool_delegators
- No longer accepts _epoch_no
as parameter, as it only returns live delegators. Additionally provides latest_delegation_hash
in output. #1486tx_info
- epoch
=> epoch_no
#1494tx_info
- Change collateral_outputs
(array) to collateral_output
(object) as collateral output is only singular in current implementation #1496address_info
- Add inline_datum
and reference_script
to output #1500pool_info
- Add sigma
field to output #1511pool_updates
- Add historical metadata information to output #1503_stake_address text
becomes _stake_addresses text[]
). The additional changes in output are as below:block_txs
- Now returns block_hash
and array of tx_hashes
address_info
- Additional field address
returned in outputaddress_assets
- Now returns address
and an array of assets
JSONaccount_addresses
- Accepts stake_addresses
array and outputs stake_address
and array of addresses
account_assets
- Accepts stake_addresses
array and outputs stake_address
and array of assets
JSONaccount_history
- Accepts stake_addresses
array along with epoch_no
integer and outputs stake_address
and array of history
JSONaccount_info
- Accepts stake_addresses
array and returns additional field stake_address
to outputaccount_rewards
- Now returns stake_address
and an array of rewards
JSONaccount_updates
- Now returns stake_address
and an array of updates
JSONasset_info
- Change minting_tx_metadata
from array to object #1533account_addresses
- Sort results by oldest address first #1538epoch_info_cache
- Only update last_tx_id of previous epoch on epoch transition #1490 and #1502epoch_info_cache
/ stake_snapshot_cache
- Store total snapshot stake to epoch stake cache, and active pool stake to stake snapshot cache #1485The backlog of items not being added to mainnet has been increasing due to delays with the Vasil HFC event for Mainnet. As such we had to come up with a split update approach. The mainnet nodes are still not qualified to be Vasil-ready (in our opinion) for 1.35.x, but dbsync 13 can be used against node 1.34.1 fine. In order to cater for this split, we have added an intermediate koios-1.0.6m tag that brings dbsync updates while maintaining node-1.34.1.
"},{"location":"Build/grest-changelog/#changes-for-api_2","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes","title":"Data Output Changes","text":"pool_delegators
- epoch_no
=> active_epoch_no
#1454asset_history
- Add block_time
and metadata
fields for all previous mint transactions #1468asset_info
- Retain latest mint transaction instead of first (in line with most CIPs as well as pool metadata - latest valid meta being live) #1468/tip
, /blocks
, /block_info
=> block_time
/genesis
=> systemStart
/epoch_info
=> start_time
, first_block_time
, last_block_time
, end_time
/tx_info
=> tx_timestamp
/asset_info
=> creation_time
tx_info
- Add Vasil data #1464collaterals
=> collateral_inputs
collateral_outputs
, reference_inputs
to tx_info
datum_hash
, inline_datum
, reference_script
to collateral input/outputs, reference inputs & inputs/outputs JSON.cost_model
instead of cost_model_id
referenceepoch_params
- Update leftover lovelace references to text for consistency: #1484key_deposit
pool_deposit
min_utxo_value
min_pool_cost
coins_per_utxo_size
get-metrics.sh
- Add active/idle connections to database #1459grest-poll.sh
: Bump haproxy to 2.6.1 and set default value of API_STRUCT_DEFINITION to be dependent on network used. #1450grest.account_active_stake_cache
- optimise code and delete historical view (beyond 4 epochs). #1451tx_metalabels
- Move metalabels from view to RPC using loose indexscan (much better performance) #1474grest.stake_snapshot_cache
- Fix rewards for new accounts #1476Since there have been a few deviations wrt Vasil for testnet and mainnet, this version only targets networks except Mainnet!
"},{"location":"Build/grest-changelog/#changes-for-api_3","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes_1","title":"Data Output Changes","text":"/epoch_info
- Add total_rewards
and avg_block_reward
for a given epoch #43/tip
, /blocks
, /block_info
=> block_time
/genesis
=> systemStart
/epoch_info
=> start_time
, first_block_time
, last_block_time
, end_time
/tx_info
=> tx_timestamp
/asset_info
=> creation_time
/blocks
, /block_info
=> Add proto_major
and proto_minor
for a given block to output #55asset_registry_update.sh
script to rely on commit hash instead of POSIX timestamps, and performance bump. #1428epoch_no
, block_no
to /address_txs
, /credential_txs
and /asset_txs
endpoints. #1409/asset_txs
, returning transactions as an array - allows for leveraging native PostgREST filtering. #1409/pool_info
. #1414setup-grest.sh
with -r
(reset flag), as the delta registry records to insert depend on file (POSIX) timestamps. #1410grest-poll.sh
. Important
gRest is an open source implementation of a query layer built over dbsync using PostgREST and HAProxy
. The package is built as part of the Koios team's efforts to unite individual community streams of work, giving back a more aligned structure to query dbsync and adopting standardisation for queries utilising open-source tooling and collaboration. In addition, there are accessibility features to deploy rules for failover, do healthchecks, set up priorities, prevent DDoS attacks, provide timeouts, report tips for analysis over a longer period, etc - which can prove really useful when performing any analysis for instances.
Note
Note that the scripts below do allow for provisioning ogmios integration too, but Ogmios - currently - is not designed to provide advanced session management for a server-client architecture in the absence of a middleware. Thus, the availability of ogmios from the monitoring instance is restricted, to avoid the ability to DDoS an instance.
"},{"location":"Build/grest/#components","title":"Components","text":"PostgREST: An RPC JSON interface for any PostgreSQL database (in our case, database served via cardano-db-sync
) to provide a RESTful Web Service. The endpoints of PostgREST are essentially the tables/functions defined in the elected schema via the grest config file. You can read more about advanced query syntax using the PostgREST API here, but we will provide a simpler view using examples towards the end of the page. It is an easy alternative - with almost no overhead, as it directly serves the underlying database as an API - compared to the Cardano GraphQL
component (which may often have lags). Other advantages of PostgREST over GraphQL-based projects include performance, statelessness, near-zero overhead, and support for JWT / native Postgres DB authentication against the REST interface.
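As a small illustration of that filtering syntax (the endpoint and column names below are examples only - substitute ones from your deployed schema), PostgREST lets you select columns and filter rows directly via query parameters:
# select two columns from the blocks view, filter and limit server-side\ncurl -s \"http://127.0.0.1:8050/blocks?select=epoch_no,block_height&epoch_no=eq.250&limit=3\"\n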
HAProxy: An easy gateway proxy that automatically provides failover/basic DDoS protection, lets you specify rules for load balancing, set up multiple frontends/backends, easily enable TLS for public-facing instances, etc. You may alter the settings for the proxy layer as per your SecOps preferences. This component is optional (eg: if you prefer to expose your PostgREST server itself, you can do so using similar steps below).
To start with, you'd want to ensure your current shell session has access to Postgres credentials, continuing from the examples in the above-mentioned Sample Postgres deployment guide.
cd $CNODE_HOME/priv\nPGPASSFILE=$CNODE_HOME/priv/.pgpass\npsql cexplorer\n
Ensure that you can connect to your Postgres DB fine using above (quit from psql once validated using \\q
). As part of guild-deploy.sh
execution, you'd find setup-grest.sh file made available in ${CNODE_HOME}/scripts
folder, which will help you automate the installation of PostgREST and HAProxy, as well as bring in the latest queries/functions provided via Koios to your instances.
Warning
As of now, gRest services are in alpha stage - while they can be utilised, please remember there may be breaking changes, and every collaborator is expected to work with the team to keep their instances up-to-date using the alpha branch.
Familiarise yourself with the usage options for the setup script; the syntax can be viewed below:
cd \"${CNODE_HOME}\"/scripts\n./setup-grest.sh -h\n#\n# Usage: setup-grest.sh [-f] [-i [p][r][m][c][d]] [-u] [-b <branch>]\n# \n# Install and setup haproxy, PostgREST, polling services and create systemd services for haproxy, postgREST and dbsync\n# \n# -f Force overwrite of all files including normally saved user config sections\n# -i Set-up Components individually. If this option is not specified, components will only be installed if found missing (eg: -i prcd)\n# p Install/Update PostgREST binaries by downloading latest release from github.\n# r (Re-)Install Reverse Proxy Monitoring Layer (haproxy) binaries and config\n# m Install/Update Monitoring agent scripts\n# c Overwrite haproxy, postgREST configs\n# d Overwrite systemd definitions\n# -u Skip update check for setup script itself\n# -q Run all DB Queries to update on postgres (includes creating grest schema, and re-creating views/genesis table/functions/triggers and setting up cron jobs)\n# -b Use alternate branch of scripts to download - only recommended for testing/development (Default: master)\n#\n
To run the setup overwriting all standard deployment tasks from a branch (eg: koios-1.0.9
branch), you may want to use:
./setup-grest.sh -f -i prmcd -r -q -b koios-1.0.9\n
Similarly - if you'd like to re-install all components and force overwrite all configs but not reset cache tables, you may run:
./setup-grest.sh -f -i prmcd -q\n
Another example could be to preserve your config, but only update queries using an alternate branch (eg: let's say you want to try the branch alpha
prior to a tagged release). To do so, you may run:
./setup-grest.sh -q -b alpha\n
Please ensure to follow the on-screen instructions, if any (for example restarting deployed services, or updating configs to specify correct target postgres URLs/enable TLS/add peers etc in ${CNODE_HOME}/priv/grest.conf
and ${CNODE_HOME}/files/haproxy.cfg
).
The default ports used will make the haproxy instance available at port 8053, or 8453 if TLS is enabled (you might want to enable a firewall rule to open this port to services you would like to access). If you want to prevent unauthenticated access to the grest schema, uncomment the jwt-secret and specify a custom secret-token
.
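As an example, with a ufw-based firewall (an assumption - use your distribution's firewall tooling as appropriate), opening the default non-TLS port could look like:
sudo ufw allow 8053/tcp\n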
Reminder
Once you've successfully deployed the grest instance, it will deploy certain cron jobs that will ensure the relevant cache tables are updated periodically. Until these have finished (especially on first run, it could take an hour or so on mainnet), your instance will likely not pass any tests from grest-poll.sh
but that's expected.
In order to enable SSL on your haproxy, all you need to do is edit the file ${CNODE_HOME}/files/haproxy.cfg
and update the frontend app section to uncomment ssl bind (and comment normal bind).
Info
If you're not familiar with how to configure TLS, or would not like to buy a certificate, you can find tips on how to create a TLS certificate for free via LetsEncrypt using tutorials here. Once you do have a TLS certificate generated, you need to chain the private key and full chain cert together in a file - /etc/ssl/server.pem
- which can be then referenced as below:
frontend app\n #bind 0.0.0.0:8053\n ## If using SSL, comment line above and uncomment line below\n bind :8453 ssl crt /etc/ssl/server.pem no-sslv3\n http-request set-log-level silent\n acl srv_down nbsrv(grest_postgrest) eq 0\n acl is_wss hdr(Upgrade) -i websocket\n ...\n
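To produce the chained /etc/ssl/server.pem from a LetsEncrypt issuance, you can simply concatenate the key and full chain (the certbot live paths below are assumptions - adjust to wherever your files actually reside):
sudo bash -c 'cat /etc/letsencrypt/live/example.com/privkey.pem /etc/letsencrypt/live/example.com/fullchain.pem > /etc/ssl/server.pem'\n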
Restart haproxy service for changes to take effect."},{"location":"Build/grest/#validation","title":"Validation","text":"With the setup, you also have a checkstatus.sh
script, which will query the Postgres DB instance via haproxy (coming through postgREST), and only report an instance as up if the latest block in your DB instance is within the last 180 seconds.
Important
If you'd like to join the elastic cluster via Koios, please raise a PR by editing the topology files in this folder to do so!!
If you were using guild
network, you could do a couple of very basic sanity checks as per below:
To query active stake for pool pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr
in epoch 122
, we can execute the below:
curl -d _pool_bech32=pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr -d _epoch_no=122 -s http://localhost:8053/rpc/pool_active_stake\n## {\"active_stake_sum\" : 19409732875}\n
To check latest owner key(s) for a given pool pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr
, you can execute the below:
curl -d _pool_bech32=pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr -s http://localhost:8050/rpc/pool_owners\n## [{\"owner\" : \"stake_test1upx5p04dn3t6dvhfh27744su35vvasgaaq565jdxwlxfq5sdjwksw\"}, {\"owner\" : \"stake_test1uqak99cgtrtpean8wqwp7d9taaqkt9gkkxga05m5azcg27chnzfry\"}]\n
You may want to explore all the endpoints that come out of the box and test them out; to do so, refer to the API documentation for OpenAPI3 documentation. Each endpoint has a pre-filled example for mainnet and connects by default to the primary Koios endpoint, allowing you to test endpoints and, if needed, grab the curl
commands to start testing yourself against your local or remote instances.
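For instance, once your own instance is up, a minimal smoke test against your local haproxy (assuming the default port and that the Koios tip function is deployed under the grest schema) could be:
curl -s http://localhost:8053/rpc/tip\n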
If you're interested in participating in decentralised infrastructure by providing an instance, there are a few additional steps you'd need to take:
Enable ports for your HAProxy instance (default: 8053), gRest Exporter service (default: 8059) and (optionally) submit API instance (default: 8090) against the monitoring instance (these ports do not need to be opened to the internet) of the corresponding network.
Ensure that each of the services above is listening on your public IP address (for instance, submitapi.sh might need to be edited to change HOSTADDR to 0.0.0.0
and restarted).
Create a PR specifying connectivity information to your HAProxy port here.
Make sure to join the telegram discussions group to participate in any discussions, actions, polls for new-features, etc. Feel free to give a shout in the group in case you have trouble following any of the above
Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
"},{"location":"Build/node-cli/#build-instructions","title":"Build Instructions","text":""},{"location":"Build/node-cli/#clone-the-repository","title":"Clone the repository","text":"Execute the below to clone the cardano-node repository to $HOME/git
folder on your system:
cd ~/git\ngit clone https://github.com/input-output-hk/cardano-node\ncd cardano-node\n
"},{"location":"Build/node-cli/#build-cardano-node","title":"Build Cardano Node","text":"You can use the instructions below to build the latest release of cardano-node.
git fetch --tags --all\ngit pull\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-node/releases/latest | jq -r .tag_name)\n\n# Use `-l` argument if you'd like to use system libsodium instead of IOG fork of libsodium while compiling\n$CNODE_HOME/scripts/cabal-build-all.sh\n
The above would copy the binaries built into ~/.local/bin
folder.
While certain folks might want to build the node themselves (could be due to OS/arch compatibility, trust factor or customisations), for most it might not make sense to build the node locally. Instead, you can download the binaries using cardano-node release notes, wherein you can find the download links for every version. Once downloaded, you would want to make it available to the preferred PATH
in your environment (if you're asking how - that'd mean you've skipped skillsets mentioned on homepage).
Execute cardano-cli
and cardano-node
to verify output as below (the exact version and git rev should depend on your checkout tag on github repository):
cardano-cli version\n# cardano-cli 8.1.2 - linux-x86_64 - ghc-8.10\n# git rev <...>\ncardano-node version\n# cardano-node 8.1.2 - linux-x86_64 - ghc-8.10\n# git rev <...>\n
"},{"location":"Build/node-cli/#update-port-number-or-pool-name-for-relative-paths","title":"Update port number or pool name for relative paths","text":"Before you go ahead with starting your node, you may want to update values for CNODE_PORT
in $CNODE_HOME/scripts/env
. Note that it is imperative for operational relays and pools to ensure that the port mentioned is opened via firewall to the destination your node is supposed to connect from. Update your network/firewall configuration accordingly. Future executions of guild-deploy.sh
will preserve and not overwrite these values.
CNODEBIN=\"${HOME}/.local/bin/cardano-node\"\nCCLI=\"${HOME}/.local/bin/cardano-cli\"\nCNODE_PORT=6000\nPOOL_NAME=\"GUILD\"\n
Important
POOL_NAME is the name of folder that you will use when registering pools and starting node in core mode. This folder would typically contain your hot.skey
,vrf.skey
and op.cert
files required. If the mentioned files are absent, the node will automatically start in a passive mode. Note that in case CNODE_PORT is changed, you'd want to re-do the deployment of systemd service as mentioned later in the guide
To test starting the node in interactive mode, you can use the pre-built script below (cnode.sh
) (note that your node logs are being written to $CNODE_HOME/logs
folder, you may not see much output beyond Listening on http://127.0.0.1:12798
). This script automatically determines whether to start the node as a relay or block producer (if the required pool keys are present in the $CNODE_HOME/priv/pool/<POOL_NAME>
as mentioned above). The script contains a user-defined variable CPU_CORES
which determines the number of CPU cores the node will use upon start-up:
######################################\n# User Variables - Change as desired #\n# Common variables set in env file #\n######################################\n\n#CPU_CORES=2 # Number of CPU cores cardano-node process has access to (please don't set higher than physical core count, 2-4 recommended)\n
You can uncomment this and set to the desired number, but be wary not to go above your physical core count. cd \"${CNODE_HOME}\"/scripts\n./cnode.sh\n
Ensure you do not have any errors in the console. To stop the node, hit Ctrl-C - we will start the node as systemd later in the document.
"},{"location":"Build/node-cli/#modify-the-node-to-p2p-mode","title":"Modify the node to P2P mode","text":"Note
The section below only refer to mainnet, as Guildnet/Preview/Preprod templates already come with P2P as default mode, and do not require steps below
In case you prefer to start the node in P2P mode (ideally, only on relays), you can do so by replacing the config.json and topology.json files in $CNODE_HOME/files
folder. You can find a sample of these two files that can be downloaded using commands below:
cd \"${CNODE_HOME}\"/files\nmv config.json config.json.bkp_$(date +%s)\nmv topology.json topology.json.bkp_$(date +%s)\ncurl -sL -f \"https://raw.githubusercontent.com/cardano-community/guild-operators/master/files/config-mainnet.p2p.json\" -o config.json\ncurl -sL -f \"https://raw.githubusercontent.com/cardano-community/guild-operators/alpha/files/topology-mainnet.json\" -o topology.json\n
Once downloaded, you'd want to update config.json (if you want to update any port/path references or change tracers from default) and the topology.json file to include your core/relay nodes in localRoots
section (replacing dummy values currently in place with \"127.0.0.1\"
address. The P2P topology file provides you few public nodes as a fallback to avoid single point of reliance, being IO provided mainnet nodes. You can also remove/update any additional peers as per your preference.
Once updated, since you modified the file manually - there is always a chance of human errors (eg: missing comma/quotes). Thus, we would recommend you to start the node interactively once again before proceeding.
cd \"${CNODE_HOME}\"/scripts\n./cnode.sh\n
Ensure you do not have any errors in the console. To stop the node, hit Ctrl-C - we will start the node as systemd later in the document.
Note
An average pool operator may not require cardano-submit-api
at all. Please verify if it is required for your use as mentioned here. If - however - you do run submit-api for accepting sizeable transaction load, you would want to override the default MEMPOOL_BYTES by uncommenting it in cnode.sh.
cardano-submit-api
is one of the binaries built as part of cardano-node
repository and allows you to submit transactions over a Web API. To run this service interactively, you can use the pre-built script below (submitapi.sh
). Make sure to update submitapi.sh
script to change listen IP or Port that you'd want to make this service available on.
cd $CNODE_HOME/scripts\n./submitapi.sh\n
To stop the process, hit Ctrl-C
"},{"location":"Build/node-cli/#systemd","title":"Run as systemd service","text":"The preferred way to run the node (and submit-api) is through a service manager like systemd. This section explains how to setup a systemd service file.
1. Deploy as a systemd service Execute the below command to deploy your node as a systemd service (from the respective scripts folder):
cd $CNODE_HOME/scripts\n./cnode.sh -d\n# Deploying cnode.service as systemd service..\n# cnode.service deployed successfully!!\n\n./submitapi.sh -d\n# Deploying cnode-submit-api.service as systemd service..\n# cnode-submit-api deployed successfully!!\n
2. Start the service Run below commands to enable automatic start of service on startup and start it.
sudo systemctl start cnode.service\nsudo systemctl start cnode-submit-api.service\n
3. Check status and stop/start commands Replace status
with stop
/start
/restart
depending on what action to take.
sudo systemctl status cnode.service\nsudo systemctl status cnode-submit-api.service\n
Important
In case you see the node exit unsuccessfully upon checking status, please verify you've followed the transition process correctly as documented below, and that you do not have another instance of node already running. It would help to check your system logs (/var/log/syslog
for debian-based and /var/log/messages
for Red Hat/CentOS/Fedora systems, you can also check journalctl -f -u <service>
to examine startup attempt for services) for any errors while starting node.
You can use gLiveView to monitor your node that was started as a systemd service.
cd $CNODE_HOME/scripts\n./gLiveView.sh\n
"},{"location":"Build/offchain-metadata-tools/","title":"Offchain Metadata Tools","text":"Important
In the Cardano multi-asset era, this project helps you create and submit metadata describing your assets, storing them off-chain.
"},{"location":"Build/offchain-metadata-tools/#download-pre-built-binaries","title":"Download pre-built binaries","text":"Go to input-output-hk/offchain-metadata-tools to download the binaries and place in a directory specified by PATH
, e.g. $HOME/.local/bin/
.
An alternative to pre-built binaries - instructions describe how to build the token-metadata-creator
tool but the offchain-metadata-tools repository contains other tools as well. Build the ones needed for your installation.
Execute the below to clone the offchain-metadata-tools repository to $HOME/git folder on your system:
cd ~/git\ngit clone https://github.com/input-output-hk/offchain-metadata-tools.git\ncd offchain-metadata-tools/token-metadata-creator\n
"},{"location":"Build/offchain-metadata-tools/#build-token-metadata-creator","title":"Build token-metadata-creator","text":"You can use the instructions below to build token-metadata-creator
, same steps can be executed in future to update the binaries (replacing appropriate tag) as well.
git fetch --tags --all\ngit pull\n# Replace master with appropriate tag if you'd like to avoid compiling against master\ngit checkout master\n$CNODE_HOME/scripts/cabal-build-all.sh\n
The above would copy the binaries into ~/.local/bin
folder."},{"location":"Build/offchain-metadata-tools/#verify","title":"Verify","text":"Verify that the tool is executable from anywhere by running:
token-metadata-creator -h\n
"},{"location":"Build/wallet/","title":"Wallet","text":"!> - An average pool operator may not require cardano-wallet
at all. Please verify if it is required for your use as mentioned here.
Ensure the Pre-Requisites are in place before you proceed.
"},{"location":"Build/wallet/#build-instructions","title":"Build Instructions","text":"Follow instructions below for building the cardano-wallet binary:
"},{"location":"Build/wallet/#clone-the-repository","title":"Clone the repository","text":"Execute the below to clone the cardano-wallet
repository to $HOME/git
folder on your system:
cd ~/git\ngit clone https://github.com/input-output-hk/cardano-wallet\ncd cardano-wallet\n
"},{"location":"Build/wallet/#build-cardano-wallet","title":"Build Cardano Wallet","text":"You can use the instructions below to build the latest release of cardano-wallet.
!> - Note that the latest release of cardano-wallet
may not work with the latest release of cardano-node
. Please check the compatibility of each cardano-wallet
release yourself in the official docs, e.g. https://github.com/input-output-hk/cardano-wallet/releases/latest.
git fetch --tags --all\ngit pull\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-wallet/releases/latest | jq -r .tag_name)\n$CNODE_HOME/scripts/cabal-build-all.sh\n
The above would copy the binaries into ~/.local/bin
folder.
You can run the below to connect to a cardano-node
instance that is expected to be already running and the wallet will start syncing.
cardano-wallet serve /\n --node-socket $CNODE_HOME/sockets/node0.socket /\n --mainnet / # if using the testnet flag you also need to specify the testnet shelley-genesis.json file\n--database $CNODE_HOME/priv/wallet\n
"},{"location":"Build/wallet/#verify-the-wallet-is-handling-requests","title":"Verify the wallet is handling requests","text":"cardano-wallet network information\n
Expected output should be similar to the following Ok.\n{\n\"network_tip\": {\n\"time\": \"2021-06-01T17:31:05Z\",\n\"epoch_number\": 269,\n\"absolute_slot_number\": 31002374,\n\"slot_number\": 157574\n},\n\"node_era\": \"mary\",\n\"node_tip\": {\n\"height\": {\n\"quantity\": 5795127,\n\"unit\": \"block\"\n},\n\"time\": \"2021-06-01T17:31:00Z\",\n\"epoch_number\": 269,\n\"absolute_slot_number\": 31002369,\n\"slot_number\": 157569\n},\n\"sync_progress\": {\n\"status\": \"ready\"\n},\n\"next_epoch\": {\n\"epoch_start_time\": \"2021-06-04T21:44:51Z\",\n\"epoch_number\": 270\n}\n}\n
"},{"location":"Build/wallet/#creatingrestoring-wallet","title":"Creating/Restoring Wallet","text":"If you're creating a new wallet, you'd first want to generate a mnemonic for use (see below):
cardano-wallet recovery-phrase generate\n# false brother typical saddle settle phrase foster sauce ask sunset firm gate service render burger\n
You can use the above mnemonic to then restore a wallet as per below: cardano-wallet wallet create from-recovery-phrase MyWalletName\n
"},{"location":"Build/wallet/#expected-output","title":"Expected output:","text":"Please enter a 15\u201324 word recovery phrase: false brother typical saddle settle phrase foster sauce ask sunset firm gate service render burger\n(Enter a blank line if you do not wish to use a second factor.)\nPlease enter a 9\u201312 word second factor:\nPlease enter a passphrase: **********\nEnter the passphrase a second time: **********\nOk.\n{\n ...\n}\n
"},{"location":"Scripts/blockperf/","title":"BlockPerf","text":"Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
blockPerf.sh
is a script to monitor the network propagation of new blocks as seen by the local cardano-node.
Although blockPerf can also run on the block producer, it makes the most sense to run it on the upstream relays. There it waits for each new block announced to the relay over the network by its remote peers.
It looks for the delay times that result
You can view this data locally as a console stream, or run it as a systemd service in background.
BlockPerf also sends this data to the TopologyUpdater server, so that there is a possibility to compare this data (similar to sendtip to pooltool). As a contributing operator you get the possibility to see how your own relays compare to other nodes regarding receive quality, delay times and thus performance.
There is no connection or constraint between the TopologyUpdater Relay subscription and the BlockPerf analysis. BlockPerf is even designed to work outside the cnTools suite.
The results of these data are a good basis to make optimizations and to evaluate which changes were useful or might by required to improve the performance compared to other relay nodes.
"},{"location":"Scripts/blockperf/#installation","title":"Installation","text":"The script is best run as a background process. This can be accomplished in many ways but the preferred method is to run it as a systemd service. A terminal multiplexer like tmux or screen could also be used but not covered here.
"},{"location":"Scripts/blockperf/#run-as-service","title":"Run as service","text":"Use the deploy-as-systemd.sh
script to create a systemd unit file. In this setup the script is started in \"service\" mode. Error/Warn level log output is handled by syslog and end up in the systems standard syslog file, normally /var/log/syslog
. journalctl -f -u cnode-tu-blockperf.service
can be used to check service output (follow mode).
Outside the cnTools environment call blockPerf.sh -d
to install it as a systemd service.
If you run blockPerf local in the console (scripts/blockPerf.sh
) , immediately after the appearance of a new block it shows where it came from, how many slots away from the previous block it was, and how many milliseconds the individual steps took.
Block:.... 6860534\n Slot..... 52833850 (+59s)\n ......... 2022-02-09 09:49:01\n Header... 2022-02-09 09:49:02,780 (+1780 ms)\n Request.. 2022-02-09 09:49:02,780 (+0 ms)\n Block.... 2022-02-09 09:49:02,830 (+50 ms)\n Adopted.. 2022-02-09 09:49:02,900 (+70 ms)\n Size..... 79976 bytes\n delay.... 1.819971868 sec\n From..... 104.xxx.xxx.61:3001\n\nBlock:.... 6860535\n Slot..... 52833857 (+7s)\n ......... 2022-02-09 09:49:08\n Header... 2022-02-09 09:49:08,960 (+960 ms)\n Request.. 2022-02-09 09:49:08,970 (+10 ms)\n Block.... 2022-02-09 09:49:09,020 (+50 ms)\n Adopted.. 2022-02-09 09:49:09,090 (+70 ms)\n Size..... 64950 bytes\n delay.... 1.028341023 sec\n From..... 34.xxx.xxx.15:4001\n
"},{"location":"Scripts/blockperf/#collaborative-web-view","title":"Collaborative web view","text":"A further aim of the blockPerf project is to use the data that individual nodes send to the central TopologyUpdater database to produce graphical visualisations and evaluations that provide the participating node operators with useful insights into their performance compared to all others.
"},{"location":"Scripts/cncli/","title":"CNCLI","text":"Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
cncli.sh
is a script to download and deploy CNCLI created and maintained by Andrew Westberg. It's a community-based CLI tool written in RUST for low-level cardano-node
communication. Usage is optional and no script is dependent on it. The main features include:
gLiveView
for peer analysis if available. sqlite
database. firstSlotOfNextEpoch - (3 * k / f)
).cncli.sh
script's main functions, sync
, leaderlog
, validate
and PoolTool sendslots
/sendtip
are not meant to be run manually, but instead deployed as systemd services that run in the background to do the block scraping and validation automatically. Additional commands exist for manual execution to initiate the sqlite
db, filling the blocklog DB with all blocks created by the pool known to the blockchain, migration of old cntoolsBlockCollector JSON blocklog, and re-validation of blocks and leaderlogs. See usage output below for a complete list of available commands.
The script works in tandem with Log Monitor to provide faster adopted status but mainly to catch slots the node is leader for but are unable to create a block for. These are marked as invalid. Blocklog will however work fine without the logMonitor
service and CNCLI
is able to handle everything except catching invalid blocks.
guild-deploy.sh
with guild-deploy.sh -s c
to download and install RUST and CNCLI. IOG fork of libsodium required by CNCLI is automatically compiled by CNCLI build process. If a previous installation is found, RUST and CNCLI will be updated to the latest version.deploy-as-systemd.sh
to deploy the systemd services that handle all the work in the background. Six systemd services in total are deployed whereof four are related to CNCLI. See above for the different purposes they serve.If you want to disable some of the deployed services, run sudo systemctl disable <service>
cnode.service
(main cardano-node
launcher)
cnode-cncli-sync.service
cnode-cncli-leaderlog.service
cnode-cncli-validate.service
cnode-cncli-ptsendtip.service
cnode-cncli-ptsendslots.service
cnode-logmonitor.service
(see Log Monitor)You can override the values in the script at the User Variables section shown below. POOL_ID, POOL_VRF_SKEY and POOL_VRF_VKEY should automatically be detected if POOL_NAME
is set in the common env
file and can be left commented. PT_API_KEY and POOL_TICKER need to be set in the script if PoolTool sendtip
/sendslots
are to be used before starting the services. For the rest of the commented values, if the defaults do not provide the right values, uncomment and make adjustments.
#POOL_ID=\"\" # Automatically detected if POOL_NAME is set in env. Required for leaderlog calculation & pooltool sendtip, lower-case hex pool id\n#POOL_VRF_SKEY=\"\" # Automatically detected if POOL_NAME is set in env. Required for leaderlog calculation, path to pool's vrf.skey file\n#POOL_VRF_VKEY=\"\" # Automatically detected if POOL_NAME is set in env. Required for block validation, path to pool's vrf.vkey file\n#PT_API_KEY=\"\" # POOLTOOL sendtip: set API key, e.g \"a47811d3-0008-4ecd-9f3e-9c22bdb7c82d\"\n#POOL_TICKER=\"\" # POOLTOOL sendtip: set the pools ticker, e.g. \"TCKR\"\n#PT_HOST=\"127.0.0.1\" # POOLTOOL sendtip: connect to a remote node, preferably block producer (default localhost)\n#PT_PORT=\"${CNODE_PORT}\" # POOLTOOL sendtip: port of node to connect to (default is CNODE_PORT from the env file)\n#CNCLI_DIR=\"${CNODE_HOME}/guild-db/cncli\" # path to the directory for cncli sqlite db\n#SLEEP_RATE=60 # CNCLI leaderlog/validate: time to wait until next check (in seconds)\n#CONFIRM_SLOT_CNT=600 # CNCLI validate: require at least these many slots to have passed before validating\n#CONFIRM_BLOCK_CNT=15 # CNCLI validate: require at least these many blocks on top of minted before validating\n#TIMEOUT_LEDGER_STATE=300 # CNCLI leaderlog: timeout in seconds for ledger-state query\n#BATCH_AUTO_UPDATE=N # Set to Y to automatically update the script if a new version is available without user interaction\n
"},{"location":"Scripts/cncli/#run","title":"Run","text":"Services are controlled by sudo systemctl <status|start|stop|restart> <service name>
All services are configured as child services to cnode.service
and as such, when an action is taken against this service it's replicated to all child services. E.g running sudo systemctl start cnode.service
will also start all child services.
Log output is handled by syslog and end up in the systems standard syslog file, normally /var/log/syslog
. journalctl -f -u <service>
can be used to check service output (follow mode). Other logging configurations are not covered here.
Recommended workflow to get started with CNCLI blocklog.
$CNODE_HOME/scripts/cncli.sh migrate <path>
where is the location to the directory containing all blocks_.json files. sudo systemctl start cnode-cncli-sync.service
(starts leaderlog
, validate
& ptsendslots
automatically)sudo systemctl start cnode-logmonitor.service
sudo systemctl start cnode-cncli-ptsendtip.service
(optional but recommended)sudo systemctl restart cnode.service
$CNODE_HOME/scripts/cncli.sh init
Usage: cncli.sh [operation <sub arg>]\nScript to run CNCLI, best launched through systemd deployed by 'deploy-as-systemd.sh'\n\nsync Start CNCLI chainsync process that connects to cardano-node to sync blocks stored in SQLite DB (deployed as service)\nleaderlog One-time leader schedule calculation for current epoch, then continuously monitors and calculates schedule for coming epochs, 1.5 days before epoch boundary on the mainnet (deployed as service)\n force Manually force leaderlog calculation and overwrite even if already done, exits after leaderlog is calculated\nvalidate Continuously monitor and confirm that the blocks made actually was accepted and adopted by chain (deployed as service)\n all One-time re-validation of all blocks in blocklog db\n epoch One-time re-validation of blocks in blocklog db for the specified epoch \nptsendtip Send node tip to PoolTool for network analysis and to show that your node is alive and well with a green badge (deployed as service)\nptsendslots Securely sends PoolTool the number of slots you have assigned for an epoch and validates the correctness of your past epochs (deployed as service)\ninit One-time initialization adding all minted and confirmed blocks to blocklog\nmigrate One-time migration from old blocklog (cntoolsBlockCollector) to new format (post cncli)\n path Path to the old cntoolsBlockCollector blocklog folder holding json files with blocks created\n
"},{"location":"Scripts/cncli/#view-blocklog","title":"View Blocklog","text":"Best and easiest viewed in CNTools and gLiveView
but the blocklog database is a SQLite DB so if you are comfortable with SQL, the sqlite3
command can be used to query the DB.
Block status
- Leader : Scheduled to make block at this slot\n- Ideal : Expected/Ideal number of blocks assigned based on active stake (sigma)\n- Luck : Leader slots assigned vs ideal slots for this epoch\n- Adopted : Block created successfully\n- Confirmed : Block created validated to be on-chain with the certainty set in `cncli.sh` for `CONFIRM_BLOCK_CNT`\n- Missed : Scheduled at slot but no record of it in CNCLI DB and no other pool has made a block for this slot\n- Ghosted : Block created but marked as orphaned and no other pool has made a valid block for this slot -> height battle or block propagation issue\n- Stolen : Another pool has a valid block registered on-chain for the same slot\n- Invalid : Pool failed to create block, base64 encoded error message can be decoded with `echo <base64 hash> | base64 -d | jq -r`\n
CNTools Open CNTools and select [b] Blocks
to open the block viewer. Either select Epoch
and enter the epoch you want to see a detailed view for or choose Summary
to display blocks for last x epochs.
If the node was elected to create blocks in the selected epoch it could look something like this:
Summary >> BLOCKS\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCurrent epoch: 96\n\n+--------+---------------------------+----------------------+--------------------------------------+\n| Epoch | Leader | Ideal | Luck | Adopted | Confirmed | Missed | Ghosted | Stolen | Invalid |\n+--------+---------------------------+----------------------+--------------------------------------+\n| 96 | 34 | 31.66 | 107.39% | 18 | 18 | 0 | 0 | 0 | 0 |\n| 95 | 32 | 30.57 | 104.68% | 32 | 32 | 0 | 0 | 0 | 0 |\n+--------+---------------------------+----------------------+--------------------------------------+\n\n[h] Home | [b] Block View | [i] Info | [*] Refresh\n
Epoch >> BLOCKS\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCurrent epoch: 96\n\n+---------------------------+----------------------+--------------------------------------+\n| Leader | Ideal | Luck | Adopted | Confirmed | Missed | Ghosted | Stolen | Invalid |\n+---------------------------+----------------------+--------------------------------------+\n| 34 | 31.66 | 107.39% | 18 | 18 | 0 | 0 | 0 | 0 |\n+---------------------------+----------------------+--------------------------------------+\n\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n| # | Status | Block | Slot | SlotInEpoch | Scheduled At | Size | Hash |\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n| 1 | confirmed | 2043444 | 11142827 | 40427 | 2020-11-16 08:34:03 CET | 3 | ec216d3fb01e4a3cc3e85305145a31875d9561fa3bbcc6d0ee8297236dbb4115 |\n| 2 | confirmed | 2044321 | 11165082 | 62682 | 2020-11-16 14:44:58 CET | 3 | b75c33a5bbe49a74e4b4cc5df4474398bfb10ed39531fc65ec2acc51f89ddce5 |\n| 3 | confirmed | 2044397 | 11166970 | 64570 | 2020-11-16 15:16:26 CET | 3 | c1ea37fd72543779b6dab46e3e29e0e422784b5fd6188f828ace9eabcc87088f |\n| 4 | confirmed | 2044879 | 11178909 | 76509 | 2020-11-16 18:35:25 CET | 3 | 35a116cec80c5dc295415e4fc8e6435c562b14a5d6833027006c988706c60307 |\n| 5 | confirmed | 2046965 | 11232557 | 130157 | 2020-11-17 09:29:33 CET | 3 | d566e5a1f6a3d78811acab4ae3bdcee6aa42717364f9afecd6cac5093559f466 |\n| 6 | confirmed | 2047101 | 11235675 | 133275 | 2020-11-17 10:21:31 CET | 3 | 3a638e01f70ea1c4a660fe4e6333272e6c61b11cf84dc8a5a107b414d1e057eb |\n| 7 | confirmed | 2047221 | 11238453 | 136053 | 2020-11-17 11:07:49 CET | 3 | 843336f132961b94276603707751cdb9a1c2528b97100819ce47bc317af0a2d6 |\n| 8 | confirmed | 2048692 | 11273507 | 171107 | 2020-11-17 20:52:03 CET | 3 | 9b3eb79fe07e8ebae163870c21ba30460e689b23768d2e5f8e7118c572c4df36 |\n| 9 | confirmed | 2049058 | 11282619 | 180219 | 2020-11-17 23:23:55 CET | 3 | 643396ea9a1a2b6c66bb83bdc589fa19c8ae728d1f1181aab82e8dfe508d430a |\n| 10 | confirmed | 2049321 | 11289237 | 186837 | 2020-11-18 01:14:13 CET | 3 | d93d305a955f40b2298247d44e4bc27fe9e3d1486ef3ef3e73b235b25247ccd7 |\n| 11 | confirmed | 2049747 | 11299205 | 196805 | 2020-11-18 04:00:21 CET | 3 | 19a43deb5014b14760c3e564b41027c5ee50e0a252abddbfcac90c8f56dc0245 |\n| 12 | confirmed | 2050415 | 11316075 | 213675 | 2020-11-18 08:41:31 CET | 3 | dd2cb47653f3bfb3ccc8ffe76906e07d96f1384bafd57a872ddbab3b352403e3 |\n| 13 | confirmed | 2050505 | 11318274 | 215874 | 2020-11-18 09:18:10 CET | 3 | deb834bc42360f8d39eefc5856bb6d7cabb6b04170c842dcbe7e9efdf9dbd2e1 |\n| 14 | confirmed | 2050613 | 11320754 | 218354 | 2020-11-18 09:59:30 CET | 3 | bf094f6fde8e8c29f568a253201e4b92b078e9a1cad60706285e236a91ec95ff |\n| 15 | confirmed | 2050807 | 11325239 | 222839 | 2020-11-18 11:14:15 CET | 3 | 21f904346ba0fd2bb41afaae7d35977cb929d1d9727887f541782576fc6a62c9 |\n| 16 | confirmed | 2050997 | 11330062 | 227662 | 2020-11-18 12:34:38 CET | 3 | 109799d686fe3cad13b156a2d446a544fde2bf5d0e8f157f688f1dc30f35e912 |\n| 17 | confirmed | 2051286 | 11336791 | 234391 | 2020-11-18 14:26:47 CET | 3 | bb1beca7a1d849059110e3d7dc49ecf07b47970af2294fe73555ddfefb9561a8 |\n| 18 | confirmed | 2051734 | 11348498 | 246098 | 2020-11-18 17:41:54 CET | 3 | 
87940b53c2342999c1ba4e185038cda3d8382891a16878a865f5114f540683de |\n| 19 | leader | - | 11382001 | 279601 | 2020-11-19 03:00:17 CET | - | - |\n| 20 | leader | - | 11419959 | 317559 | 2020-11-19 13:32:55 CET | - | - |\n| 21 | leader | - | 11433174 | 330774 | 2020-11-19 17:13:10 CET | - | - |\n| 22 | leader | - | 11434241 | 331841 | 2020-11-19 17:30:57 CET | - | - |\n| 23 | leader | - | 11435289 | 332889 | 2020-11-19 17:48:25 CET | - | - |\n| 24 | leader | - | 11440314 | 337914 | 2020-11-19 19:12:10 CET | - | - |\n| 25 | leader | - | 11442361 | 339961 | 2020-11-19 19:46:17 CET | - | - |\n| 26 | leader | - | 11443861 | 341461 | 2020-11-19 20:11:17 CET | - | - |\n| 27 | leader | - | 11446997 | 344597 | 2020-11-19 21:03:33 CET | - | - |\n| 28 | leader | - | 11453110 | 350710 | 2020-11-19 22:45:26 CET | - | - |\n| 29 | leader | - | 11455323 | 352923 | 2020-11-19 23:22:19 CET | - | - |\n| 30 | leader | - | 11505987 | 403587 | 2020-11-20 13:26:43 CET | - | - |\n| 31 | leader | - | 11514983 | 412583 | 2020-11-20 15:56:39 CET | - | - |\n| 32 | leader | - | 11516010 | 413610 | 2020-11-20 16:13:46 CET | - | - |\n| 33 | leader | - | 11518958 | 416558 | 2020-11-20 17:02:54 CET | - | - |\n| 34 | leader | - | 11533254 | 430854 | 2020-11-20 21:01:10 CET | - | - |\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n
gLiveView Currently shows a block summary for current epoch. For full block details use CNTools for now. Invalid, missing, ghosted and stolen blocks only shown in case of a non-zero value.
\u2502--------------------------------------------------------------\u2502\n\u2502 BLOCKS Leader | Ideal | Luck | Adopted | Confirmed \u2502\n\u2502 24 27.42 87.53% 1 1 \u2502\n\u2502 08:07:57 until leader XXXXXXXXX.....................\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
"},{"location":"Scripts/cntools-changelog/","title":"Changelog","text":"All notable changes to this tool will be documented in this file.
Whenever you're updating between versions where format/hash of keys have changed , or you're changing networks - it is recommended to Backup your Wallet and Pool folders before you proceed with launching cntools on a fresh network.
The format is based on Keep a Changelog, and this adheres to Semantic Versioning.
"},{"location":"Scripts/cntools-changelog/#1102-2023-10-30","title":"[11.0.2] - 2023-10-30","text":""},{"location":"Scripts/cntools-changelog/#fixed","title":"Fixed","text":"test_koios
call from cntools.library to cntools.shdialog
by default, it is an optional component - and no longer installed by default.--whole-utxo
flag, as it returns all address and will not accept --address
--whole-utxo
flag when query UTxO, as required by cardano-cli 1.28, to keep behaviour same as before.Advanced
Though mostly unchanged in the user interface, this is a major update with most of the code re-written/touched in the back-end. Only the most noticeable changes added to changelog.
"},{"location":"Scripts/cntools-changelog/#added_10","title":"Added","text":"--cold-verification-key-file
instead of --verification-key-file
This is a major release with a lot of changes. It is highly recommended that you familiarise yourself with the usage for Hybrid or Online v/s Offline mode on a testnet environment before doing it on production. Please visit https://cardano-community.github.io/guild-operators/upgrade for details.
"},{"location":"Scripts/cntools-changelog/#added_13","title":"Added","text":"cardano-address
and bech32
in yout $PATH to use this feature (available if you rebuild cardano-node
using updated cabal-build-all.sh
), reusing guide from @ilap.srm
) when available when deleting files.,
) in user input for sending ADA and pledge/cost at pool registration to make it easier to count the zeroscardano-node 1.19.0
, please upgrade if you're not using this version.Pool >> Show
now moved to its own menu option This is to de-clutter and because it takes time to parse this data from ledger-statePool >> Delegators
removed.pool >> show
stake distribution showing up as always 0.prereqs.sh -t
) fix for internal update--output-format hex
when extracting pool ID in hex formatWallet >> Encrypt
as these are re-generated from keys and need to be writableFunds >> Withdraw
for base address as this is used to pay the withdraw transaction feePool >> Show
delegator rewards parsing from ledger-statemainnet_candidate
, and add second argument (g) to run prereqs against guild network[c]
to [Esc]
Wallet >> Show
2.1.1
included a change to env file and thus require a major version bump.Pool >> Show
Pool >> Show
(stake + reward)
is below pledge (single-owner only for now)Pool >> Show
Pool >> New
to Pool >> Register
.Wallet >> List
Not a registered wallet on chain
information from Wallet listingPool >> Show
Important
Familiarize yourself with the Online workflow of creating wallets and pools on the Preview/Preprod/Guild network first. You can then move on to test the Offline Workflow. The Offline workflow means that the private keys never touch the Online node. When comfortable with both the online and offline CNTools workflow, it's time to deploy what you learnt on the mainnet.
This chapter describes some common use-cases for wallet and pool creation when running CNTools in Online mode. CNTools contains much more functionality not described here.
Create WalletA wallet is needed for pledge and to pay for pool registration fee.
[w] Wallet
and you will be presented with the following menu: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Wallet Management\n\n ) New - create a new wallet\n ) Import - import a Daedalus/Yoroi 24/25 mnemonic or Ledger/Trezor HW wallet\n ) Register - register a wallet on chain\n ) De-Register - De-Register (retire) a registered wallet\n ) List - list all available wallets in a compact view\n ) Show - show detailed view of a specific wallet\n ) Remove - remove a wallet\n ) Decrypt - remove write protection and decrypt wallet\n ) Encrypt - encrypt wallet keys and make all files immutable\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Wallet Operation\n\n [n] New\n [i] Import\n [r] Register\n [z] De-Register\n [l] List\n [s] Show\n [x] Remove\n [d] Decrypt\n [e] Encrypt\n [h] Home\n
[n] New
to create a new wallet. [i] Import
can also be used to import a Daedalus/Yoroi based 15 or 24 word wallet seed ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> NEW\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nName of new wallet: Test\n\nNew Wallet : Test\nAddress : addr_test1qpq5qjr774cyc6kxcwp060k4t4hwp42q43v35lmcg3gcycu5uwdwld5yr8m8fgn7su955zf5qahtrgljqfjfa4nr8jfsj4alxk\nEnterprise Address : addr_test1vpq5qjr774cyc6kxcwp060k4t4hwp42q43v35lmcg3gcyccuxhdka\n\nYou can now send and receive Ada using the above addresses.\nNote that Enterprise Address will not take part in staking.\nWallet will be automatically registered on chain if you\nchoose to delegate or pledge wallet when registering a stake pool.\n
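For the curious: under the hood, a CLI wallet is just a payment and stake key pair plus the addresses derived from them. A rough cardano-cli sketch of what such a wallet consists of (illustrative only - not necessarily the exact commands CNTools runs; file names follow the env defaults, and the testnet magic is an assumption matching the addr_test output above):
cardano-cli address key-gen --verification-key-file payment.vkey --signing-key-file payment.skey\ncardano-cli stake-address key-gen --verification-key-file stake.vkey --signing-key-file stake.skey\n# base address combines the payment and stake parts (testnet magic assumed for the addr_test example above)\ncardano-cli address build --payment-verification-key-file payment.vkey --stake-verification-key-file stake.vkey --testnet-magic 1 --out-file base.addr\n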
The Import
feature of CNTools is originally based on this guide from Ilap.
If you would like to use Import
function to import a Daedalus/Yoroi based 15 or 24 word wallet seed, please ensure that cardano-address
and bech32
binaries are available in your $PATH
environment variable:
bech32 --version\n1.1.0\n\ncardano-address --version\n3.5.0\n
If the version is not as per above, please run the latest guild-deploy.sh
from here and rebuild cardano-node
as instructed here.
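As a quick sanity check before importing, you can confirm that both binaries resolve from your $PATH (a minimal sketch; the echoed hint is only a suggestion):
command -v cardano-address bech32 || echo \"binaries not found in PATH - re-run guild-deploy.sh and rebuild cardano-node\"\n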
To import a Daedalus/Yoroi wallet to CNTools, open CNTools and select the [w] Wallet
option, and then select the [i] Import
option; the following menu will appear:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> IMPORT\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Wallet Import\n\n ) Mnemonic - Daedalus/Yoroi 24 or 25 word mnemonic\n ) HW Wallet - Ledger/Trezor hardware wallet\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Wallet operation\n\n [m] Mnemonic\n [w] HW Wallet\n [h] Home\n
Note
You can import a hardware wallet using [w] HW Wallet
above, but please note that before you are able to use a hardware wallet in CNTools, you need to ensure you can detect your hardware device at the OS level using cardano-hw-cli
Select the wallet you want to import, for Daedalus / Yoroi wallets select [m] Mnemonic
:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> IMPORT >> MNEMONIC\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nName of imported wallet: TEST\n\n24 or 15 word mnemonic(space separated):\n
Give your wallet a name (in this case 'TEST'), and enter your mnemonic phrase. Please ensure that you **READ** through the complete notes presented by CNTools before proceeding. Create Pool. Create the necessary pool keys.
[p] Pool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Pool Management\n\n ) New - create a new pool\n ) Register - register created pool on chain using a stake wallet (pledge wallet)\n ) Modify - change pool parameters and register updated pool values on chain\n ) Retire - de-register stake pool from chain in specified epoch\n ) List - a compact list view of available local pools\n ) Show - detailed view of specified pool\n ) Rotate - rotate pool KES keys\n ) Decrypt - remove write protection and decrypt pool\n ) Encrypt - encrypt pool cold keys and make all files immutable\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Pool Operation\n\n [n] New\n [r] Register\n [m] Modify\n [x] Retire\n [l] List\n [s] Show\n [o] Rotate\n [d] Decrypt\n [e] Encrypt\n [h] Home\n
[n] New
to create a new pool ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> NEW\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nPool Name: TEST\n\nPool: TEST\nID (hex) : 8d5a3510f18ce241115da38a1b2419ed82d308599c16e98caea1b4c0\nID (bech32) : pool134dr2y833n3yzy2a5w9pkfqeakpdxzzenstwnr9w5x6vqtnclue\n
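For reference, pool keys like the ones created above can also be generated manually with cardano-cli; a rough sketch (illustrative only - not necessarily what CNTools runs verbatim; file names follow the env defaults):
cardano-cli node key-gen --cold-verification-key-file cold.vkey --cold-signing-key-file cold.skey --operational-certificate-issue-counter-file cold.counter\ncardano-cli node key-gen-VRF --verification-key-file vrf.vkey --signing-key-file vrf.skey\n# derive the pool ID shown in the CNTools output above\ncardano-cli stake-pool id --cold-verification-key-file cold.vkey --output-format bech32\n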
Register the pool on-chain.
[p] Pool
[r] Register
Make sure you set your pledge low enough to ensure the funds in your wallet will cover pledge plus pool registration fees.
Test
in our case. As this is a newly created wallet, you will be prompted to continue with wallet registration. When complete and if successful, both wallet and pool will be registered on-chain. It will look something like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> REGISTER\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOnline mode - The default mode to use if all keys are available\n\nHybrid mode - 1) Go through the steps to build a transaction file\n 2) Copy the built tx file to an offline node\n 3) Sign it using 'Sign Tx' with keys on offline node\n (CNTools started in offline mode '-o' without node connection)\n 4) Copy the signed tx file back to the online node and submit using 'Submit Tx'\n\nSelected value: [o] Online\n\n# Select pool\nSelected pool: TEST\n\n# Pool Parameters\npress enter to use default value\n\nPledge (in Ada, default: 50,000):\nMargin (in %, default: 7.5):\nCost (in Ada, minimum: 340, default: 340):\n\n# Pool Metadata\n\nEnter Pool's JSON URL to host metadata file - URL length should be less than 64 chars (default: https://foo.bat/poolmeta.json):\n\nEnter Pool's Name (default: TEST):\nEnter Pool's Ticker , should be between 3-5 characters (default: TEST):\nEnter Pool's Description (default: No Description):\nEnter Pool's Homepage (default: https://foo.com):\n\nOptionally set an extended metadata URL?\nSelected value: [n] No\n{\n \"name\": \"TEST\",\n \"ticker\": \"TEST\",\n \"description\": \"No Description\",\n \"homepage\": \"https://foo.com\",\n \"nonce\": \"1613146429\"\n}\n\nPlease host file /opt/cardano/guild/priv/pool/TEST/poolmeta.json as-is at https://foo.bat/poolmeta.json\n\n# Pool Relay Registration\nSelected value: [d] A or AAAA DNS record (single)\nEnter relays's DNS record, only A or AAAA DNS records: relay.foo.com\nEnter relays's port: 6000\nAdd more relay entries?\nSelected value: [n] No\n\n# Select main owner/pledge wallet (normal CLI wallet)\nSelected wallet: Test (100,000.000000 Ada)\nWallet Test3 not registered on chain\n\nWaiting for new block to be created (timeout = 600 slots, 600s)\nINFO: press any key to cancel and return (won't stop transaction)\n\nOwner #1 : Test added!\n\nRegister a multi-owner pool (you need to have stake.vkey of any additional owner in a seperate wallet folder under $CNODE_HOME/priv/wallet)?\nSelected value: [n] No\n\nUse a separate rewards wallet from main owner?\nSelected value: [n] No\n\nWaiting for new block to be created (timeout = 600 slots, 600s)\nINFO: press any key to cancel and return (won't stop transaction)\n\nPool TEST successfully registered!\nOwner #1 : Test\nReward Wallet : Test\nPledge : 50,000 Ada\nMargin : 7.5 %\nCost : 340 Ada\n\nUncomment and set value for POOL_NAME in ./env with 'TEST'\n\nINFO: Total balance in 1 owner/pledge wallet(s) are: 99,497.996518 Ada\n
POOL_NAME
in ./env
with 'TEST' (in our case, the POOL_NAME
is TEST
). The cnode.sh
script will automatically detect whether the files required to run as a block producing node are present in the $CNODE_HOME/priv/pool/<POOL_NAME>
directory. The node runs with an operational certificate, generated using the KES hot key. For security reasons, the protocol requires you to re-generate (or rotate) your KES key once it reaches expiry. On mainnet, this expiry is 62 KES periods of 36 hours each (roughly 93 days, hence the quarterly rotation), after which your node will not be able to forge valid blocks unless the key is rotated. To be able to rotate KES keys, your cold key files (cold.skey
, cold.vkey
and cold.counter
) need to be present on the machine where you run CNTools to rotate your KES key.
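The expiry arithmetic behind those numbers can be sketched in a couple of lines of shell (mainnet genesis values assumed; the start period is an example matching the rotation output shown further below):
slots_per_kes_period=129600 # 36 hours of 1-second slots per KES period on mainnet\nmax_kes_evolutions=62 # maximum evolutions per operational certificate on mainnet\nstart_period=240 # example KES start period of the current op.cert\necho \"KES keys expire at start of period $(( start_period + max_kes_evolutions ))\" # 302 in this example\n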
To rotate the KES keys and generate the operational certificate - op.cert
.
From the main menu select [p] Pool
[o] Rotate
The output should look like:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> ROTATE KES\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSelect pool to rotate KES keys on\nSelected pool: TEST\n\nPool KES keys successfully updated\nNew KES start period : 240\nKES keys will expire : 302 - 2021-09-04 11:24:31 UTC\n\nRestart your pool node for changes to take effect\n\npress any key to return to home menu\n
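Behind the scenes, a rotation boils down to generating a fresh KES key pair and issuing a new op.cert against the cold keys; a rough cardano-cli sketch (illustrative only - the kes-period must match your current period, 240 in the output above):
cardano-cli node key-gen-KES --verification-key-file hot.vkey --signing-key-file hot.skey\ncardano-cli node issue-op-cert --kes-verification-key-file hot.vkey --cold-signing-key-file cold.skey --operational-certificate-issue-counter cold.counter --kes-period 240 --out-file op.cert\n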
cardano-node
. If deployed as a systemd
service as shown here, you can run sudo systemctl restart cnode
. You can use gLiveView - the output at the top should say > Cardano Node - (Core - Guild)
.
Alternatively, you can check the node logs in $CNODE_HOME/logs/
to see whether the node is performing leadership checks (TraceStartLeadershipCheck
, TraceNodeIsNotLeader
, etc.)
Important
Koios CNTools is like a Swiss Army knife for pool operators, simplifying typical operations regarding wallet keys and pool management. Please note that this tool only aims to simplify usual tasks for its users, but it should NOT act as an excuse to skip understanding how to manually work through things or the basics of Linux operations. The skills highlighted on the home page are paramount for a stake pool operator, and so is the understanding of configuration files and network. Please ensure you've read and understood the disclaimers before proceeding.
Visit the Changelog section to see progress and current release.
"},{"location":"Scripts/cntools/#overview","title":"Overview","text":"The tool consist of three files.
cntools.sh
- the main script to launch cntools. cntools.library
- internal script with helper functions. In addition to the above files, there is also a dependency on the common env
file. CNTools connects to your node through the configuration in the env
file located in the same directory as the script. Customize env
and cntools.sh
files to your needs.
Additionally, CNTools can integrate and enable optional functionalities based on external components:
cncli.sh
is a companion script with optional functionalities to run on the core node (block producer), such as monitoring created blocks, calculating leader schedules and block validation. logMonitor.sh
is another companion script meant to be run together with the cncli.sh
script to give a more complete picture. See CNCLI and Log Monitor sections for more details.
Koios CNTools can operate in the following modes:
-a
runtime argument, this launches CNTools exposing a new Advanced
menu, which allows users to manage (create/mint/burn) new assets. -o
runtime argument, this launches CNTools with a limited set of features. This mode does not require access to cardano-node. It is mainly used to create Wallet/Pool and access Transaction >> Sign
to sign an offline transaction file created in Hybrid mode. The update functionality is provided from within CNTools. In case of breaking changes, please follow the prompts post-upgrade. If stuck, it's always best to re-run the latest guild-deploy.sh
before proceeding.
If you have not updated in a while, it is possible that you might come from a release with breaking changes. If so, please be sure to check out the upgrade instructions.
"},{"location":"Scripts/cntools/#navigation","title":"Navigation","text":"The scripts menu supports both arrow key navigation and shortcut key selection. The character within the square brackets is the shortcut to press for quick navigation. For other selections like wallet and pool menu that don't contain shortcuts, there is a third way to navigate. Key pressed is compared to the first character of the menu option and if there is a match the selection jumps to this location. A handy way to quickly navigate a large menu.
"},{"location":"Scripts/cntools/#hardware-wallet","title":"Hardware Wallet","text":"CNTools includes hardware wallet support since version 7.0.0
through Vacuumlabs cardano-hw-cli
application. Initialize the device and update its firmware/app to the latest version before use, following the manufacturer's instructions.
To enable hardware support run guild-deploy.sh -s w
. This downloads and installs Vacuumlabs cardano-hw-cli
including udev
configuration. When a new version of Vacuumlabs cardano-hw-cli
is released, run guild-deploy.sh -s w
again to update. For additional runtime options, run guild-deploy.sh -h
.
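Once installed, you can check that the device is detectable at the OS level before starting CNTools, for example (assuming cardano-hw-cli is in your PATH and the device is connected and unlocked):
cardano-hw-cli device version\n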
Trezor Bridge
for your system before trying to use your Trezor device in CNTools. You can find the latest version of the bridge at https://wallet.trezor.io/#/bridge. CNTools can be run in online and offline mode. At a very high level, for working with offline devices, remember that you need to use CNTools on an online node to generate a staging transaction for the desired type of transaction, and then move the staging transaction to an offline node to sign (authorize) using the signing keys on your offline node - and then bring back the signed transaction to the online node for submission to the chain.
For the offline workflow, all the wallet and pool keys should be kept on the offline node. The backup function in CNTools has an option to create a backup without private keys (sensitive signing keys) to be transferred to the online node. All other files are included in the backup to be transferred to the online node.
Keys excluded from backup when created without private keys: Wallet - payment.skey
, stake.skey
Pool - cold.skey
Note that setting up an offline server requires a good SysOps background (you need to know how to set up your server with an offline mirror repository, how to transfer files across, and be fairly familiar with the disk layout presented in the documentation). The guild-deploy.sh
in its current state is not expected to run on an offline machine. Essentially, you simply need the cardano-cli
, bech32
, cardano-address
binaries in your $PATH
, OS level dependency packages [jq
, coreutils
, pkgconfig
, gcc-c++
and bc
], and perhaps a copy from your online cnode
directory (to ensure you have the right genesis
/config
files on your offline server). We strongly recommend that you familiarise yourself with the workflow on the preview / preprod / guild networks first, before attempting it on mainnet.
Example workflow for creating a wallet and pool:
sequenceDiagram Note over Offline: Create/Import a wallet Note over Offline: Create a new pool Note over Offline: Rotate KES keys to generate op.cert Note over Offline: Create a backup w/o private keys Offline->>Online: Transfer backup to online node Note over Online: Fund the wallet base address with enough Ada Note over Online: Register wallet using ' Wallet \u00bb Register ' in hybrid mode Online->>Offline: Transfer built tx file back to offline node Note over Offline: Use ' Transaction >> Sign ' with payment.skey from wallet to sign transaction Offline->>Online: Transfer signed tx back to online node Note over Online: Use ' Transaction >> Submit ' to send signed transaction to blockchain Note over Online: Register pool in hybrid mode loop Offline-->Online: Repeat steps to sign and submit built pool registration transaction end Note over Online: Verify that pool was successfully registered with ' Pool \u00bb Show ' Online mode: To start CNTools in Online (advanced) mode, execute the script from the $CNODE_HOME/scripts/
directory:
cd $CNODE_HOME/scripts\n./cntools.sh -a\n
You should get a screen that looks something like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> Koios CNTools vX.X.X - Guild - CONNECTED <<\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Main Menu Telegram Announcement / Support channel: t.me/CardanoKoios/9759\n\n ) Wallet - create, show, remove and protect wallets\n ) Funds - send, withdraw and delegate\n ) Pool - pool creation and management\n ) Transaction - Sign and Submit a cold transaction (hybrid/offline mode)\n ) Blocks - show core node leader schedule & block production statistics\n ) Backup - backup & restore of wallet/pool/config\n ) Advanced - Developer and advanced features: metadata, multi-assets, ...\n ) Refresh - reload home screen content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Epoch 276 - 3d 19:08:27 until next\n What would you like to do? Node Sync: 12 :)\n\n [w] Wallet\n [f] Funds\n [p] Pool\n [t] Transaction\n [b] Blocks\n [u] Update\n [z] Backup & Restore\n [a] Advanced\n [r] Refresh\n [q] Quit\n
Offline mode To start CNTools in Offline Mode, execute the script from the $CNODE_HOME/scripts/
directory using the -o
flag:
cd $CNODE_HOME/scripts\n./cntools.sh -o\n
The main menu header should let you know that the node is started in offline mode:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> Koios CNTools vX.X.X - Guild - OFFLINE <<\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Main Menu Telegram Announcement / Support channel: t.me/CardanoKoios/9759\n\n ) Wallet - create, show, remove and protect wallets\n ) Funds - send, withdraw and delegate\n ) Pool - pool creation and management\n ) Transaction - Sign and Submit a cold transaction (hybrid/offline mode)\n\n ) Backup - backup & restore of wallet/pool/config\n\n ) Refresh - reload home screen content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Epoch 276 - 3d 19:03:46 until next\n What would you like to do?\n\n [w] Wallet\n [f] Funds\n [p] Pool\n [t] Transaction\n [z] Backup & Restore\n [r] Refresh\n [q] Quit\n
"},{"location":"Scripts/env/","title":"Common env","text":"A common environment file called env
is sourced by most scripts in the Guild Operators repository. This file holds common variables and functions needed by other scripts. There are several benefits to this: duplicate settings do not have to be specified, and functions can be reused, decreasing the risk of misconfiguration and inconsistency.
env
file is downloaded together with the rest of the scripts when the Pre-Requisites are followed, and is located in the $CNODE_HOME/scripts/
directory. The file is also automatically downloaded/updated by some of the individual scripts if missing, like cntools.sh
, gLiveView.sh
and topologyUpdater.sh
. All custom changes in the User Variables section are untouched on updates unless a forced overwrite is selected when running guild-deploy.sh
.
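If you write your own helper scripts, you can source the same env file to reuse its variables and functions; a minimal sketch (assuming env lives under $CNODE_HOME/scripts and, as the bundled scripts do, passing the offline argument to skip node connectivity checks):
. \"${CNODE_HOME}/scripts/env\" offline\necho \"Node config in use : ${CONFIG}\"\n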
Most variables can be left commented to use the automatically detected or default value, but there are some that need to be set, as explained below.
CNODE_PORT
- This is the most important variable and needs to be set. Used when launching the node through cnode.sh
and to identify the correct process of the node. CNODE_HOME
- The root directory of the Cardano node holding all the files needed. Can be left commented if guild-deploy.sh
has been run, as this variable is then exported and added as a system environment variable. POOL_NAME
- If the node is to be started as a block producer by cnode.sh
this variable needs to be uncommented and set. This is the name given to the pool in CNTools (not ticker), i.e. the pool directory name under $CNODE_HOME/priv/pool/<POOL_NAME>
Take your time and look through the different variables and their explanations and decide if you need/want to change the default setting. For a default deployment using guild-deploy.sh
, the CNODE_PORT
(all installs) and POOL_NAME
(only block producer) should be the only variables needed to be set.
######################################\n# User Variables - Change as desired #\n# Leave as is if unsure #\n######################################\n\n#CCLI=\"${HOME}/.local/bin/cardano-cli\" # Override automatic detection of path to cardano-cli executable\n#CNCLI=\"${HOME}/.local/bin/cncli\" # Override automatic detection of path to cncli executable (https://github.com/AndrewWestberg/cncli)\n#CNODE_HOME=\"/opt/cardano/cnode\" # Override default CNODE_HOME path (defaults to /opt/cardano/cnode)\nCNODE_PORT=6000 # Set node port\n#CONFIG=\"${CNODE_HOME}/files/config.json\" # Override automatic detection of node config path\n#SOCKET=\"${CNODE_HOME}/sockets/node0.socket\" # Override automatic detection of path to socket\n#TOPOLOGY=\"${CNODE_HOME}/files/topology.json\" # Override default topology.json path\n#LOG_DIR=\"${CNODE_HOME}/logs\" # Folder where your logs will be sent to (must pre-exist)\n#DB_DIR=\"${CNODE_HOME}/db\" # Folder to store the cardano-node blockchain db\n#UPDATE_CHECK=\"Y\" # Check for updates to scripts, it will still be prompted before proceeding (Y|N).\n#TMP_DIR=\"/tmp/cnode\" # Folder to hold temporary files in the various scripts, each script might create additional subfolders\n#EKG_HOST=127.0.0.1 # Set node EKG host IP\n#EKG_PORT=12788 # Override automatic detection of node EKG port\n#PROM_HOST=127.0.0.1 # Set node Prometheus host IP\n#PROM_PORT=12798 # Override automatic detection of node Prometheus port\n#EKG_TIMEOUT=3 # Maximum time in seconds that you allow EKG request to take before aborting (node metrics)\n#CURL_TIMEOUT=10 # Maximum time in seconds that you allow curl file download to take before aborting (GitHub update process)\n#BLOCKLOG_DIR=\"${CNODE_HOME}/guild-db/blocklog\" # Override default directory used to store block data for core node\n#BLOCKLOG_TZ=\"UTC\" # TimeZone to use when displaying blocklog - https://en.wikipedia.org/wiki/List_of_tz_database_time_zones\n#SHELLEY_TRANS_EPOCH=208 # Override automatic detection of shelley epoch start, e.g 208 for mainnet\n#TG_BOT_TOKEN=\"\" # Uncomment and set to enable telegramSend function. To create your own BOT-token and Chat-Id follow guide at:\n#TG_CHAT_ID=\"\" # https://cardano-community.github.io/guild-operators/Scripts/sendalerts\n#USE_EKG=\"N\" # Use EKG metrics from the node instead of Promethus. 
Prometheus metrics (default) should yield slightly better performance\n#TIMEOUT_LEDGER_STATE=300 # Timeout in seconds for querying and dumping ledger-state\n#IP_VERSION=4 # The IP version to use for push and fetch, valid options: 4 | 6 | mix (Default: 4)\n\n#WALLET_FOLDER=\"${CNODE_HOME}/priv/wallet\" # Root folder for Wallets\n#POOL_FOLDER=\"${CNODE_HOME}/priv/pool\" # Root folder for Pools\n# Each wallet and pool has a friendly name and subfolder containing all related keys, certificates, ...\n#POOL_NAME=\"\" # Set the pool's name to run node as a core node (the name, NOT the ticker, ie folder name)\n\n#WALLET_PAY_VK_FILENAME=\"payment.vkey\" # Standardized names for all wallet related files\n#WALLET_PAY_SK_FILENAME=\"payment.skey\"\n#WALLET_HW_PAY_SK_FILENAME=\"payment.hwsfile\"\n#WALLET_PAY_ADDR_FILENAME=\"payment.addr\"\n#WALLET_BASE_ADDR_FILENAME=\"base.addr\"\n#WALLET_STAKE_VK_FILENAME=\"stake.vkey\"\n#WALLET_STAKE_SK_FILENAME=\"stake.skey\"\n#WALLET_HW_STAKE_SK_FILENAME=\"stake.hwsfile\"\n#WALLET_STAKE_ADDR_FILENAME=\"reward.addr\"\n#WALLET_STAKE_CERT_FILENAME=\"stake.cert\"\n#WALLET_STAKE_DEREG_FILENAME=\"stake.dereg\"\n#WALLET_DELEGCERT_FILENAME=\"delegation.cert\"\n\n#POOL_ID_FILENAME=\"pool.id\" # Standardized names for all pool related files\n#POOL_HOTKEY_VK_FILENAME=\"hot.vkey\"\n#POOL_HOTKEY_SK_FILENAME=\"hot.skey\"\n#POOL_COLDKEY_VK_FILENAME=\"cold.vkey\"\n#POOL_COLDKEY_SK_FILENAME=\"cold.skey\"\n#POOL_OPCERT_COUNTER_FILENAME=\"cold.counter\"\n#POOL_OPCERT_FILENAME=\"op.cert\"\n#POOL_VRF_VK_FILENAME=\"vrf.vkey\"\n#POOL_VRF_SK_FILENAME=\"vrf.skey\"\n#POOL_CONFIG_FILENAME=\"pool.config\"\n#POOL_REGCERT_FILENAME=\"pool.cert\"\n#POOL_CURRENT_KES_START=\"kes.start\"\n#POOL_DEREGCERT_FILENAME=\"pool.dereg\"\n\n#ASSET_FOLDER=\"${CNODE_HOME}/priv/asset\" # Root folder for Multi-Assets containing minted assets and subfolders for Policy IDs\n#ASSET_POLICY_VK_FILENAME=\"policy.vkey\" # Standardized names for all multi-asset related files\n#ASSET_POLICY_SK_FILENAME=\"policy.skey\"\n#ASSET_POLICY_SCRIPT_FILENAME=\"policy.script\" # File extension '.script' mandatory\n#ASSET_POLICY_ID_FILENAME=\"policy.id\"\n
"},{"location":"Scripts/gliveview/","title":"gLiveView","text":"Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
Koios gLiveView is a local monitoring tool to use in addition to remote monitoring tools like Prometheus/Grafana, Zabbix or IOG's RTView. This is especially useful when moving to a systemd deployment - if you haven't done so already - as it offers an intuitive UI to monitor the node status.
The tool is independent from other files and can run as a standalone utility that can be stopped/started without affecting the status of cardano-node
.
If you've used guild-deploy.sh, you can skip this part, as this is already set up for you. The tool relies on the common env
configuration file. To get current epoch blocks, the logMonitor.sh script is needed (and can be combined with CNCLI). This is optional and Koios gLiveView will function without it.
Note
For those who follow the folder structure in this repo and do not wish to run guild-deploy.sh
, you can run the below in $CNODE_HOME/scripts
folder
To download the script:
curl -s -o gLiveView.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/gLiveView.sh\ncurl -s -o env https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/env\nchmod 755 gLiveView.sh\n
"},{"location":"Scripts/gliveview/#configuration-startup","title":"Configuration & Startup","text":"For most setups, it's enough to set CNODE_PORT
in the env
file. The rest of the variables should automatically be detected. If required, modify User Variables in env
and gLiveView.sh
to suit your environment (if the folder structure you use is different). This should lead you to a stage where you can start running ./gLiveView.sh
in the folder where you downloaded the script (the default location would be $CNODE_HOME/scripts
). Note that the script is smart enough to automatically detect when you're running as a Core or Relay and will show fields accordingly.
The tool can be run in legacy mode with only standard ASCII characters for terminals with trouble displaying the box-drawing characters. Run ./gLiveView.sh -h
to show available command-line parameters, or permanently set it directly in the script.
A sample output from both core and relay together with peer analysis:
Core Relay Peer Analysis "},{"location":"Scripts/gliveview/#upper-main-section","title":"Upper main section","text":"Displays live metrics from cardano-node gathered through the node's EKG/Prometheus (env setting) endpoint.
activeSlotsCoeff
). A slot on MainNet happens every 1 second (slotLength
), thus the max chain density can be calculated as slotLength * activeSlotsCoeff = 5%
. Normally, it should fluctuate around this value. starting|sync xx.x%
or if close to reference tip, the tip difference Tip (ref) - Tip (node)
to see how far off the tip (diff value) the node is. With current parameters, a slot diff up to 40 from reference tip is considered good, but it should usually stay below 30. It's perfectly normal to see big differences in slots between blocks - it's the built-in randomness at play. To see if a node is really healthy and staying on tip, you would need to compare the tip between multiple nodes. Cold
peers indicate the number of inactive but known peers to the node. Warm
peers tell how many established connections the node has. Hot
peers show how many established connections are actually active. Bi-Dir
(bidirectional) and Uni-Dir
(unidirectional) indicate how the handshake protocol negotiated the connection. The connection between p2p nodes will always be bidirectional, but it will be unidirectional between p2p nodes and non-p2p nodes. Duplex
shows the connections that are actually used in both directions; only bidirectional connections have this potential. If the node is run as a core, identified by the 'forge-about-to-lead' parameter, a second core section is displayed.
Missed slot checks - A value that shows if the node has missed slots for attempting leadership checks (as absolute value and percentage since node startup). !!! info \"Missed Slot Leadership Check\"
Note that while this counter should ideally be close to zero, you would often see a higher value if the node is busy (e.g. paused for garbage collection or busy with reward calculations). A consistently high percentage of missed slots would need further investigation (assistance for troubleshooting can be sought here), as in extremely remote cases it can overlap with a slot that your node could be a leader for.
Blocks - If CNCLI is activated to store blocks created in a blocklog DB, data from this blocklog is displayed. See linked CNCLI documentation for details regarding the different block metrics. If CNCLI is not deployed, block metrics displayed are taken from node metrics and show blocks created by the node since node start.
A manual peer analysis can be triggered by key press p
. A latency test will be done on incoming and outgoing connections to the node.
Note
Note that with P2P enabled, an incoming/outgoing connection can be reused for bi-directional traffic. There isn't a way to distinctly identify the P2P peer's direction yet for a given IP.
For outgoing connections (peers in the topology file), the ping type used is tried in this order: 1. cncli - If available, this gives the most accurate measure as it checks the entire handshake process against the remote peer. 2. ss - Sends a TCP SYN package to ping the remote peer on the cardano-node
port. Should give ~100% success rate. 3. tcptraceroute - Same as ss. 4. ping - fallback method using ICMP ping against the IP. Will only work if the firewall of the remote peer accepts ICMP traffic.
For incoming connections, only ICMP ping is used, as the remote peer's port is unknown. It's not uncommon to see many undetermined peers for incoming connections, as it's a good security practice to disable ICMP in the firewall.
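To reproduce the fallback checks by hand against a single peer, the equivalent commands look roughly like this (hypothetical peer address 203.0.113.10 and port 6000; tcptraceroute needs to be installed separately):
ping -c 3 203.0.113.10 # ICMP - only works if the remote peer allows ICMP\ntcptraceroute 203.0.113.10 6000 # TCP-based check against the node port\n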
Once the analysis is finished, it will display the RTTs (round-trip times) for the peers and group them in the ranges 0-50, 50-100, 100-200 and >200 ms. The analysis is NOT live. Press [h] Home
to go back to default view or [i] Info
to show in-script help text. Up
and Down
arrow keys are used to select the incoming or outgoing detailed list of IPs and their RTT values. Left (<)
and Right (>)
arrow keys can be used to navigate the pages in the selected list.
In case you run into trouble while running the script, you might want to edit env
& gLiveView.sh
and look at the User Variables section. You can override the values if the automatic detection does not provide the right information, but we would appreciate it if you could also notify us by raising an issue against the GitHub repository:
gLiveView.sh
######################################\n# User Variables - Change as desired #\n######################################\n\nNODE_NAME=\"Cardano Node\" # Change your node's name prefix here, keep at or below 19 characters!\nREFRESH_RATE=2 # How often (in seconds) to refresh the view (additional time for processing and output may slow it down)\nLEGACY_MODE=false # (true|false) If enabled unicode box-drawing characters will be replaced by standard ASCII characters\nRETRIES=3 # How many attempts to connect to running Cardano node before erroring out and quitting\nPEER_LIST_CNT=6 # Number of peers to show on each in/out page in peer analysis view\nTHEME=\"dark\" # dark = suited for terminals with a dark background\n# light = suited for terminals with a bright background\nENABLE_IP_GEOLOCATION=\"Y\" # Enable IP geolocation on outgoing and incoming connections using ip-api.com\n
"},{"location":"Scripts/itnrewards/","title":"Itnrewards","text":""},{"location":"Scripts/itnrewards/#concept","title":"Concept","text":"To claim rewards earned during the Incentivized TestNet the private and public keys from ITN must be converted to Shelley stake keys. A script called itnRewards.sh
has been created to guide you through the process of converting the keys and to create a CNTools compatible wallet from where the rewards can be withdrawn.
jcli
account in ITN was ed25519_sk (not extended), you can run the itnRewards.sh
script providing the name for the CNTools wallet and ITN owner public/secret keys that were used to register your pool as below. cd $CNODE_HOME/scripts\n./itnRewards.sh MyITNWallet ~/jormu/account/priv/owner.sk ~/jormu/account/priv/owner.pk\n
FUNDS >> WITHDRAW
to move rewards to the base address of the wallet. Disclaimer
Currently, this is to protect the existing pools from the ITN that already have a delegator base against spoofing - to avoid scammers building on results of ITN from known pools. There would be a solution in the future for Mainnet nodes too - but it doesn't apply to those in its current form.
"},{"location":"Scripts/itnwitness/#concept","title":"Concept","text":"Due to the expected ticker spoofing attack for pools that were famous during ITN, some of the community members have proposed an interim solution to verify the legitimacy of a pool for delegators. You can check the high-level workflow below:
graph TB A(\"ITN Owner skey (ed25519/ed25519e) ..\") --x C([\"jcli key sign ..\"]) B(\"Haskell Pool ID (pool.id) ..\") --x C C --x D(\"Signature key, (pool.sig) ..\") E(\"ITN Owner vkey (ed25519_pk) ..\") --x F(\"Extended Metadata JSON (poolmeta_extended.json) ..\") D --x F F --x G(\"Pool Meta JSON (poolmeta.json) ..\") ;"},{"location":"Scripts/itnwitness/#steps","title":"Steps","text":"The actual implementation is pretty straightforward; we will keep it brief, as we assume those participating are fairly familiar with jcli
usage.
mainnet_pool.id
). owner_skey
) as per below: jcli key sign --secret-key ~/jormu/account/priv/owner.sk $CNODE_HOME/priv/pool/TEST/pool.id --output mainnet_pool.sig\ncat mainnet_pool.sig\n# ed25519_sig1sn32v3z...d72rg7rc6gs\n
{\n\"itn\": {\n\"owner\": \"ed25519_pk1...\",\n\"witness\": \"ed25519_sig1...\"\n}\n}\n
If the process is approved to appear for wallets, we may consider providing easier alternatives. If you have any queries about the process, or any additions, please create a git issue/PR against the guild repository - to capture common queries and update instructions/help text where appropriate.
"},{"location":"Scripts/itnwitness/#sample-output-of-json-files-generated","title":"Sample output of JSON files generated","text":"Metadata JSON used for registering pool (one that will be hosted URL used to define pool, eg: https://hosting.site/poolmeta.json)
{\n\"name\":\"Test\",\n\"ticker\":\"TEST\",\n\"description\":\"For demo purposes only\",\n\"homepage\":\"https://hosting.site\",\n\"nonce\":\"1595816423\",\n\"extended\":\"https://hosting.site/poolmeta_extended.json\"\n}\n
Extended Metadata JSON used for hosting additional metadata (hosted at the URL referred to in the extended
field above, thus - eg: https://hosting.site/poolmeta_extended.json)
{\n\"itn\": {\n\"owner\": \"ed25519_pk1...\",\n\"witness\": \"ed25519_sig1...\"\n}\n}\n
"},{"location":"Scripts/logmonitor/","title":"Log Monitor","text":"Reminder !!
Ensure the Pre-Requisites are in place before you proceed.
logMonitor.sh
is a general purpose JSON log monitoring script for traces created by cardano-node
. Currently, it looks for traces related to leader slots and block creation but other uses could be added in the future.
For the core node (block producer) the logMonitor.sh
script can be run to monitor the JSON log file created by cardano-node
for traces related to leader slots and block creation.
For optimal coverage, it's best run together with CNCLI scripts as they provide different functionalities. Together, they create a complete picture of blocks assigned, created, validated or invalidated due to node issues.
"},{"location":"Scripts/logmonitor/#installation","title":"Installation","text":"The script is best run as a background process. This can be accomplished in many ways but the preferred method is to run it as a systemd service. A terminal multiplexer like tmux or screen could also be used but not covered here.
Use the deploy-as-systemd.sh
script to create a systemd unit file (deployed together with CNCLI). Log output is handled by syslog and end up in the systems standard syslog file, normally /var/log/syslog
. journalctl -f -u cnode-logmonitor.service
can be used to check service output (follow mode). Other logging configurations are not covered here.
Best viewed in CNTools or gLiveView. See CNCLI for example output.
"},{"location":"Scripts/sendalerts/","title":"Sendalerts","text":"!> Ensure the Pre-Requisites are in place before you proceed.
This section describes the ways in which CNTools can send important messages to the operator.
"},{"location":"Scripts/sendalerts/#telegram-alerts","title":"Telegram alerts","text":"If known but unwanted errors occur on your node, or if characteristic values indicate an unusual status , CNTools can send you Telegram alert messages.
To do this, you first have to activate your own bot and link it to your own Telegram user. Here is an explanation of how this works:
Open Telegram and search for \"botfather\".
Write him your wish: /newbot
.
Define a name for your bot, such as cntools_[POOLNAME]_alerts
.
Botfather will confirm the creation of your bot by giving you the unique bot access token. Keep it safe and private.
Now send at least one direct message to your new bot.
Open this URL in your browser by using your own, just created bot access token:
https://api.telegram.org/bot<your-access-token>/getUpdates\n
result.message.chat.id
. This chat id should be a large integer number.This is all you need to enable your Telegram alerts in the scripts/env
file - uncomment and add the chat ID to the TG_CHAT_ID
user variable in the env
file:
...\nTG_CHAT_ID=\"<YOUR_TG_CHAT_ID>\"\n... \n
"},{"location":"Scripts/topologyupdater/","title":"Topology Updater","text":"Reminder !!
The topologyUpdater shell script must be executed on the relay node as a cronjob exactly every 60 minutes. After 4 consecutive requests (3 hours) the node is considered a new relay node in listed in the topology file. If the node is turned off, it's automatically delisted after 3 hours.
"},{"location":"Scripts/topologyupdater/#download","title":"Download and Configure","text":"If you have run guild-deploy.sh, this should already be available in your scripts folder and make this step unnecessary.
Before the updater can make a valid request to the central topology service, it must query the current tip/blockNo from the well-synced local node. It connects to your node through the configuration in the script as well as the common env
configuration file. Customize these files for your needs.
To download topologyUpdater.sh
manually, you can execute the commands below and test executing Topology Updater once (it's OK if first execution gives back an error):
cd $CNODE_HOME/scripts\ncurl -s -o topologyUpdater.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/topologyUpdater.sh\ncurl -s -o env https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/env\nchmod 750 topologyUpdater.sh\n./topologyUpdater.sh\n
"},{"location":"Scripts/topologyupdater/#modify","title":"Examine and modify the variables within topologyUpdater.sh script","text":"Out of the box, the scripts might come with some assumptions, that may or may not be valid for your environment. One of the common changes as an SPO would be to the complete CUSTOM_PEERS section as below to include your local relays/BP nodes (described in the How do I add my own nodes section), and any additional peers you'd like to be always available at minimum. Please do take time to update the variables in User Variables section in env
& topologyUpdater.sh
:
### topologyUpdater.sh\n\n######################################\n# User Variables - Change as desired #\n######################################\n\nCNODE_HOSTNAME=\"CHANGE ME\" # (Optional) Must resolve to the IP you are requesting from\nCNODE_VALENCY=1 # (Optional) for multi-IP hostnames\nMAX_PEERS=15 # Maximum number of peers to return on successful fetch\n#CUSTOM_PEERS=\"None\" # Additional custom peers to (IP,port[,valency]) to add to your target topology.json\n# eg: \"10.0.0.1,3001|10.0.0.2,3002|relays.mydomain.com,3003,3\"\n#BATCH_AUTO_UPDATE=N # Set to Y to automatically update the script if a new version is available without user interaction\n
Any customisations you add above, will be saved across future guild-deploy.sh
executions, unless you specify the -f
flag to overwrite completely.
systemd service The script can be deployed as a background service in different ways but the recommended and easiest way if guild-deploy.sh was used, is to utilize the deploy-as-systemd.sh
script to setup and schedule the execution. This will deploy both push & fetch service files as well as timers for a scheduled 60 min node alive message and cnode restart at the user set interval (default: 24 hours) when running the deploy script.
cnode-tu-push.service
: pushes a node alive message to Topology Updater APIcnode-tu-push.timer
: schedules the push service to execute once every hourcnode-tu-fetch.service
: fetches a fresh topology file before the cnode.service
file is started/restartedcnode-tu-restart.service
: handles the restart of cardano-node
(cnode.sh
)cnode-tu-restart.timer
: schedules the cardano-node
restart service, default every 24hsystemctl list-timers
can be used to to check the push and restart service schedule.
crontab job Another way to deploy the topologyUpdater.sh
script is as a crontab
job. Add the script to be executed once per hour at a minute of your choice (eg xx:25 o'clock in the example below). The example below will handle both the fetch and push in a single call to the script once an hour. In addition to the below crontab job for topologyUpdater, it's expected that you also add a scheduled restart of the relay node to pick up a fresh topology file fetched by topologyUpdater script with relays that are alive and well.
25 * * * * /opt/cardano/cnode/scripts/topologyUpdater.sh\n
"},{"location":"Scripts/topologyupdater/#logs","title":"Logs","text":"You can check the last result of push message in logs/topologyUpdater_lastresult.json
. If deployed as systemd service, use sudo journalctl -u <service>
to check output from service.
If one of the parameters is outside the allowed ranges, invalid or missing the returned JSON will tell you what needs to be fixed.
Don't try to execute the script more often than once per hour. It's completely useless and may lead to a temporary blacklisting.
"},{"location":"Scripts/topologyupdater/#why-does-my-topology-file-only-contain-iog-peers","title":"Why does my topology file only contain IOG peers?","text":"Each subscribed node (4 consecutive requests) is allowed to fetch a subset of other nodes to prove loyalty/stability of the relay. Until reaching this point, your fetch calls will only return IOG peers combined with any custom peers added in USER VARIABLES section of topologyUpdater.sh
script
The engineers of cardano-node
network stack suggested to use around 20 peers. More peers create unnecessary and unwanted system load and delays.
In its default setting, topologyUpdater returns a list of 15 remote peers.
Note that the change in topology is only effective upon restart of your node. Make sure you account for some scheduled restarts on your relays, to help onboard newer relays onto the network (as described in the systemd section).
"},{"location":"Scripts/topologyupdater/#how-do-i-add-my-own-relaysstatic-nodes-in-addition-to-dynamic-list-generated-by-topologyupdater","title":"How do I add my own relays/static nodes in addition to dynamic list generated by topologyUpdater?","text":"Most of the Stake Pool Operators may have few preferences (own relays, close friends, etc) that they would like to add to their topology by default. This is where the CUSTOM_PEERS
variable in topologyUpdater.sh
comes in. You can add a list of peers in the format of: hostname/IP:port[:valency]
here and the output topology.json
formed will already include the custom peers that you supplied. Every custom peer is defined in the form [address]:[port]
and optional :[valency]
(if not specified, the valency defaults to 1
). Multiple custom peers are separated by |
. An example of a valid CUSTOM_PEERS
variable would be:
CUSTOM_PEERS=\"foo.bar.io,3001,2|198.175.21.197,6001|36.233.3.89,6000\n
The list above would add three custom peers with the specified addresses and ports, with the first one additionally specifying the optional valency parameter (in this case 2
)."},{"location":"Scripts/topologyupdater/#how-are-the-peers-for-my-topology-file-selected","title":"How are the peers for my topology file selected?","text":"We calculate the distance on the Earth's surface from your node's IP to all subscribed peers. We then order the peers by distance (closest first) and start by selecting one peer. We then skip some, pick the next, skip, pick, skip, pick ... until we reach the end of the list (furthest away). The number of skipped records is calculated in a way to have the desired number of peers at the end.
Every requesting node has its personal distance to all other nodes.
We assume this should result in a well-distributed and interconnected peering network.
"},{"location":"docker/build/","title":"Build","text":""},{"location":"docker/build/#intro","title":"Intro","text":"\ud83d\udca1 Docker containers are the fastest way to run a Cardano node in both \"Relay\" and \"Block-Producing\" (Pool) mode.
"},{"location":"docker/build/#how-to-build","title":"How to build","text":"docker build -t cardanocommunity/cardano-node:latest - < dockerfile_bin\n
"},{"location":"docker/build/#for-windows-users","title":"For Windows Users","text":"With Powershell on Windows, you can run docker by typing the following command:
Get-Content dockerfile_bin | docker build -t guild-operators/cardano-node:latest -\n
"},{"location":"docker/build/#see-also","title":"See also","text":"Docker Tips
Docker Official Docs
"},{"location":"docker/docker/","title":"Overview","text":"Running your own Cardano node has never been so fast and easy.
But first, a kind reminder to the security aspects of running docker containers.
"},{"location":"docker/docker/#external-resources","title":"External resources","text":"Modular docker images based on Debian.
Based on the Guild's work we decided to build the Cardano Node images in 3 stages:
prereq.sh
to prepare the development environment before compiling the node source code. -> Stage1If you prefer to build the images your own than you can check:
The dockerfiles are located in ./files/docker/
Node Ports Wallet Ports Flavor Node (6000) Wallet (8090) Debian Prometheus (12798) Prometheus (12798) EKG (12781)"},{"location":"docker/run/","title":"Run","text":""},{"location":"docker/run/#os-requirements","title":"OS Requirements","text":"docker-ce
installed - Get Docker.Note
1) --entrypoint=bash
# This option won't start the node's container but only the OS running (the node software wont actually start, you'll need to manually execute entrypoint.sh ), ready to get in (trough the command docker exec -it < container name or hash > /bin/bash
) and play/explore around with it in command line mode. 2) all guild tools env variable can be used to start a new container using custom values by using the \"-e\" option. 3) CPU and RAM and SHared Memory allocation option for the container can be used when you start the container (i.e. --shm-size or --memory or --cpus official docker resource docs)
docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
"},{"location":"docker/run/#use-cases_1","title":"Use Cases:","text":"docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-p 6000:6000\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-e CONFIG=/opt/cardano/cnode/priv/<your own configuration files>.yml\n-p 6000:6000\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
"},{"location":"docker/security/","title":"Security","text":""},{"location":"docker/security/#docker-security-best-practices","title":"Docker Security best practices","text":""},{"location":"docker/security/#intro","title":"Intro","text":"On the security front, Docker developers are faced with different types of security attacks such as:
Docker containers are now being exploited to covertly mine for cryptocurrency, marking a shift from ransomware to cryptocurrency malware. As with all things in security, also Docker security is a moving target \u2014 so it\u2019s helpful to have access to up-to-date information, including experience-based best practices, for securing your containerized environments.
"},{"location":"docker/security/#here-below-some-key-concepts","title":"Here below some key concepts:","text":"Use a Third-Party Security Tool Docker allows you to use containers from untrusted public repositories, which increases the need to scrutinize whether the container was created securely and whether it is free of any corrupt or malicious files. For this, use a multi-purpose security tool that gives extensive dev-to-production security controls.(keep reading below)
Manage Vulnerability It is best to have a sound vulnerability management program that has multiple checks throughout the container lifecycle. Vulnerability management should incorporate quality gates to detect access issues and weaknesses for a potential exploit from dev-to-production environments.
Monitor and Audit Container Activity It is vital to monitor the container ecosystem and detect suspicious activity. Container monitoring activities provide real-time reports that can help you react promptly to a security breach.
Enable Docker Content Trust Docker Content Trustis a new feature incorporated into Docker 1.8. It is disabled by default, but once enabled, allows you to verify the integrity, authenticity, and publication date of all Docker images from the Docker Hub Registry.
Use Docker Bench for Security You should consider Docker Bench for Security as your must-use script. Once the script is run, you will notice a lot of information regarding configuration best practices for deploying Docker containers that can be used to further secure your Docker server and containers.
Resource Utilization To reduce performance impacts and denial-of-service attacks, it is a good practice to implement limits on the system resources that the containers can consume. If, for example, a web server is compromised, it helps to limit the impact to the other processes that are running on a host.
RBAC RBAC is role-based access control. If you have multiple users accessing you enviroment, this is a must-have. It can be quite expensive to implement but portainer makes it super easy.
Guild tips:
NEVER NEVER NEVER expose Docker API publicly!!!
(disabled by default)
Keep Docker Host Up-to-date
Reverse Proxy
Docker Socket Ownership
Run Docker Containers as Root
Use Trusted Docker Images
Use Privileged Mode Carefully
(This is usually done by adding --privileged you can use --security-opt=no-new-privileges
instead)Some more general tips:
\"--cap-drop ALL\"
DOCKER_OPTS= \"--iptables=false\"
With this quick guide you will be able to run a cardano node in seconds and also have the powerfull Koios SPO scripts built-in.
"},{"location":"docker/tips/#how-to-operate-interactively-within-the-container","title":"How to operate interactively within the container","text":"Once executed the container as a deamon with attached tty you are then able to enter the container by using the flag -dit
.
While if you have a hook within the container console, use the following command (change CN
with your container name):
docker exec -it CN bash
This command will bring you within the container bash env ready to use the Koios tools.
"},{"location":"docker/tips/#docker-flags-explained","title":"Docker flags explained","text":"\"docker build\" options explained:\n -t : option is to \"tag\" the image you can name the image as you prefer as long as you maintain the references between dockerfiles.\n\n\"docker run\" options explained:\n -d : for detach the container\n -i : interactive enabled -t : terminal session enabled\n -e : set an Env Variable\n -p : set exposed ports (by default if not specified the ports will be reachable only internally)\n--hostname : Container's hostname\n --name : Container's name\n
"},{"location":"docker/tips/#custom-container-with-your-own-cfg","title":"Custom container with your own cfg","text":"docker run --init -itd \n-name Relay # Optional (recommended for quick access): set a name for your newly created container.\n-p 9000:6000 # Optional: to expose the internal container's port (6000) to the host <IP> port 9000\n-e NETWORK=mainnet # Mandatory: mainnet / preprod / guild-mainnet / guild\n--security-opt=no-new-privileges # Option to prevent privilege escalations\n-v <YourNetPath>:/opt/cardano/cnode/sockets # Optional: useful to share the node socket with other containers\n-v <YourCfgPath>:/opt/cardano/cnode/priv # Optional: if used has to contain all the sensitive keys needed to run a node as core\n-v <YourDBbk>:/opt/cardano/cnode/db # Optional: if not set a fresh DB will be downloaded from scratch\ncardanocommunity/cardano-node:latest # Mandatory: image to run\n
Note
To be able to use the CNTools encryption key feature you need to manually change in \"cntools.config\" ENABLE_CHATTR to \"true\" and not use the --security-opt=no-new-privileges
docker run option.
The docker container has an optional backup and restore functionality that can be used to backup the /opt/cardano/cnode/db
directory. To have the backup persist longer than the countainer, the backup directory should be mounted as a volume.
[!NOTE] The backup and restore functionality is disabled by default.
[!WARNING] Make sure adequate space exists on the host as the backup will double the space consumed by the database.
"},{"location":"docker/tips/#creating-a-backup","title":"Creating a Backup","text":"When the container is started with the ENABLE_BACKUP environment variable set to Y the container will automatically create a backup in the /opt/cardano/cnode/backup/$NETWORK-db
directory. The backup will be created when the container is started and if the backup directory is smaller than the db directory.
When the container is started with the ENABLE_RESTORE environment variable set to Y the container will automatically restore the latest backup from the /opt/cardano/cnode/backup/$NETWORK-db
directory. The database will be restored when the container is started and if the backup directory is larger than the db directory.