From 71a19171404ceec437bf9ea22304f329f240bce0 Mon Sep 17 00:00:00 2001
From: rdlrt <3169068+rdlrt@users.noreply.github.com>
Date: Mon, 27 Nov 2023 04:42:01 +0000
Subject: [PATCH] Move dbscripts SQL to koios-artifacts repo, and prep v1.1.0 (#1708)

---
 Scripts/gliveview/index.html |  17 ++-------
 docker/docker/index.html     |  41 ++++++++++++++++++---
 docker/run/index.html        |   3 +-
 search/search_index.json     |   2 +-
 sitemap.xml                  |  68 +++++++++++++++++------------------
 sitemap.xml.gz               | Bin 490 -> 491 bytes
 6 files changed, 76 insertions(+), 55 deletions(-)

diff --git a/Scripts/gliveview/index.html b/Scripts/gliveview/index.html
index 3beaf418a..76facc428 100644
--- a/Scripts/gliveview/index.html
+++ b/Scripts/gliveview/index.html
@@ -889,21 +889,8 @@

gLiveView

Ensure the Pre-Requisites are in place before you proceed.

Koios gLiveView is a local monitoring tool to use in addition to remote monitoring tools like Prometheus/Grafana, Zabbix or IOG's RTView. This is especially useful when moving to a systemd deployment - if you haven't done so already - as it offers an intuitive UI to monitor the node status.

-

The tool is independent from other files and can run as a standalone utility that can be stopped/started without affecting the status of cardano-node.

-
Download
-

If you've used guild-deploy.sh, you can skip this part, as this is already set up for you. The tool relies on the common env configuration file. To get current epoch blocks, the logMonitor.sh script is needed (and can be combined with CNCLI). This is optional and Koios gLiveView will function without it.

-
-

Note

-

If you follow the folder structure in this repo and do not wish to run guild-deploy.sh, you can run the below in the $CNODE_HOME/scripts folder

-
-

To download the script:

-
curl -s -o gLiveView.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/gLiveView.sh
-curl -s -o env https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/env
-chmod 755 gLiveView.sh
-
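If you also want current epoch blocks as mentioned in the note above, a hedged sketch for fetching logMonitor.sh in the same way - assuming it lives alongside the other helper scripts in the repository - would be:

```bash
# Optional: logMonitor.sh provides current-epoch block data for gLiveView (path assumed to match the scripts above)
curl -s -o logMonitor.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/logMonitor.sh
chmod 755 logMonitor.sh
```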
Configuration & Startup
-

For most setups, it's enough to set CNODE_PORT in the env file. The rest of the variables should automatically be detected. If required, modify User Variables in env and gLiveView.sh to suit your environment (if folder structure you use is different). This should lead you to a stage where you can now start running ./gLiveView.sh in the folder you downloaded the script (the default location would be $CNODE_HOME/scripts). Note that the script is smart enough to automatically detect when you're running as a Core or Relay and will show fields accordingly.

+

For most setups, it's enough to set CNODE_PORT in the env file. The rest of the variables should automatically be detected. If required, modify User Variables in env and gLiveView.sh to suit your environment (if the environment is customised). This should lead you to a stage where you can start running ./gLiveView.sh from the folder where you downloaded the script (the default location would be $CNODE_HOME/scripts). Note that the script is smart enough to automatically detect when you're running as a Core or Relay and will show fields accordingly.

The tool can be run in legacy mode with only standard ASCII characters for terminals that have trouble displaying the box-drawing characters. Run ./gLiveView.sh -h to show the available command-line parameters, or set it permanently directly in the script.
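As a minimal sketch of the configuration step above - assuming CNODE_PORT ships commented out in env and that 6000 is an example port for your node - the setup could look like:

```bash
# Set the only variable that usually needs changing (6000 is an example value)
sed -i 's/^#CNODE_PORT=.*/CNODE_PORT=6000/' env
# Run from the folder containing gLiveView.sh and env (default: $CNODE_HOME/scripts)
./gLiveView.sh
# ./gLiveView.sh -h lists the available command-line parameters, including the legacy/ASCII mode switch
```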

A sample output from both core and relay together with peer analysis:

@@ -938,7 +925,7 @@
Upper main section -
  • Block propagation - Last delay measures the duration between when the last block was scheduled to be produced and when the node learned about it. Late blocks are blocks whose delay is larger than 5s. If the node is not synching, the number of late blocks needs to stay low. Within ⅓/5s estimates the chance of observing a delay of ⅓/5s (based on the delays observed for previous blocks). A healthy node needs to stay above 95% of blocks within 3s. Finally, served blocks counts how many blocks were fetched by "in" peers. If this does not increase for a long time, it means the "in" peers are learning about new blocks from somewhere else (and therefore this node is not contributing towards accelerating the propagation). Overall, these metrics are helpful in tweaking the topology and/or performance of the network links.
  • +
  • Block propagation - Last Block measures the duration between when the last block was scheduled to be produced and when the node learned about it. Late blocks are blocks whose delay is larger than 5s. If the node is not synching, the number of late blocks needs to stay low. Within ⅓/5s estimates the chance of observing a delay of ⅓/5s (based on the delays observed for previous blocks). A healthy node needs to stay above 95% of blocks within 3s. Finally, served blocks counts how many blocks were fetched by "in" peers. If this does not increase for a long time, it means the "in" peers are learning about new blocks from somewhere else (and therefore this node is not contributing towards accelerating the propagation). Overall, these metrics are helpful in tweaking the topology and/or performance of the network links.
• Core section

    If the node is run as a core, identified by the 'forge-about-to-lead' parameter, a second core section is displayed.

diff --git a/docker/docker/index.html b/docker/docker/index.html
index e9d99c196..e2e34fa41 100644
--- a/docker/docker/index.html
+++ b/docker/docker/index.html
@@ -744,6 +744,19 @@
🔔 Built-in Cardano software
+
+
@@ -992,6 +1005,19 @@
🔔 Built-in Cardano software
+
+
@@ -1099,12 +1125,18 @@

🔔 Built-in Cardano software: Mithril

🔔 Built-in tools

    • CNTools
    • gLiveView
    • CNCLI
    • +
    • Ogmios
    • +
    • Cardano Hardware CLI
    • +
    • Cardano Signer
    • Monitoring ready (with EKG and Prometheus)

Docker Splash screen

    @@ -1119,11 +1151,12 @@

    CNCLI

    CNCLI

Guild Operators Docker strategy (mainnet / preview / preprod / guild)

    Modular docker images based on Debian.

    -

    Based on the Guild's work we decided to build the Cardano Node images in 3 stages:

    +

Based on the Guild's work, the Cardano Node image is built in a single stage: -> dockerfile_bin (a hedged build sketch follows the list below)

      -
    • 1st stage: it uses prereq.sh to prepare the development environment before compiling the node source code. -> Stage1
    • -
    • 2nd stage: based on stage1, this stage intent is to compile and produce the binaries of the node. -> Stage2
    • -
    • 3rd stage: based upon a minimal debian image it incorporates the node's binaries as well as all the Koios' SPO tools. -> Stage3
    • +
    • Uses guild-deploy.sh to:
    • +
• Install the OS prerequisites
    • +
    • Add the cardano software from release binaries
    • +
    • Add the guild's SPO tools and the node's configuration files.
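A hedged sketch of building the image yourself from the dockerfile_bin mentioned above - the file path, tag and build context are assumptions, adjust them to the actual repository layout:

```bash
# Build the single-stage image (the path to dockerfile_bin is an assumption)
git clone https://github.com/cardano-community/guild-operators.git
cd guild-operators
docker build -f files/docker/node/dockerfile_bin -t cardano-node:custom .
```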

Additional docs

If you prefer to build the images on your own, you can check:

diff --git a/docker/run/index.html b/docker/run/index.html
index fc7a9187f..fe3d94bd8 100644
--- a/docker/run/index.html
+++ b/docker/run/index.html
@@ -1025,7 +1025,8 @@

Use Cases:
Note

1) --entrypoint=bash # This option won't start the node's container but only the OS running (the node software won't actually start, you'll need to manually execute entrypoint.sh ), ready to get in (through the command docker exec -it < container name or hash > /bin/bash) and play/explore around with it in command line mode. 2) All guild tools' env variables can be used to start a new container using custom values by using the "-e" option. -3) CPU and RAM and SHared Memory allocation option for the container can be used when you start the container (i.e. --shm-size or --memory or --cpus official docker resource docs)

+3) CPU and RAM and Shared Memory allocation options for the container can be used when you start the container (i.e. --shm-size or --memory or --cpus official docker resource docs) +4) --env MITHRIL_DOWNLOAD=Y # This option will allow the Mithril client to download the latest Mithril snapshot of the blockchain when the container starts and does not yet have a copy of the blockchain. This is useful when you want to start a new node from scratch and don't want to wait for it to sync from the network. This option is only available for the mainnet, preprod, and preview networks. (A combined docker run example follows below.)
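A hedged example combining the options above into a single docker run invocation - the image name, the NETWORK variable value and the resource limits are assumptions, adjust them to your setup:

```bash
# Start a mainnet container, let Mithril bootstrap the chain, and cap resources (values are examples only)
docker run -d --name cardano-node \
  -e NETWORK=mainnet \
  --env MITHRIL_DOWNLOAD=Y \
  --shm-size=1g --memory=16g --cpus=4 \
  cardanocommunity/cardano-node   # image name assumed - use the image you actually pull or build
```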

diff --git a/search/search_index.json b/search/search_index.json
index d45fb933e..dd77feba6 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"

This documentation site (rather, the repository itself) is created by some well-known and experienced community members and contains instructions/information about the various guild tools which simplify stake-ops (setting up, managing and monitoring pools) for operators. Note that the guides are there to help you simplify your tasks - but as an entity responsible for creating blocks on a financial platform, we expect some basic pre-requisite skill sets - at a professional level - before entering the portal:

    Everyone is welcome to contribute to the repository (via documentation, testing, code, videos, etc). Our aim is to work together and reduce confusion rather than hosting 100 versions of documentation - each marketing their pool in a way.

    "},{"location":"#support","title":"Support","text":"

    The Telegram Support channel is used to announce new releases and changes to the code base. This is also the place to ask general questions regarding the documentation and scripts on this site.

    To report bugs and issues with scripts and documentation please open a GitHub Issue. Feature requests are best opened as a discussion thread.

    "},{"location":"#getting-started","title":"Getting Started","text":"

    Use the sidebar to navigate through the topics. Note that the instructions assume the folder structure as per here.

    Again, Feedback/Contribution and ownership of tasks is always welcome. If you're interested in collaborating regularly, make a start - and you should be part of the guild already .

    "},{"location":"basics/","title":"Basics","text":""},{"location":"basics/#architecture","title":"Architecture","text":"

The architecture for the various components is already described at docs.cardano.org by CF/IOHK. We will not reinvent the wheel.

    "},{"location":"basics/#manual-software-pre-requirements","title":"Manual Software Pre-Requirements","text":"

While we do not intend to hand out step-by-step instructions, the tools are often misused as a shortcut to avoid ensuring the base skill sets mentioned on the home page. Some of the common gotchas that we often find SPOs missing (a hedged example follows the list below):

- It is imperative that pools operate with highly accurate system time, in order to propagate blocks to the network in a timely manner and avoid penalties to their own (or at times other competing) blocks. Please refer to sample guidance [here ](https://ubuntu.com/server/docs/network-ntp) for details - the precise steps may depend on your OS.\n- Ensure your firewall rules at the network as well as OS level are updated according to the usage of your system; you'd want to whitelist only the rules that you really need to open to the world (eg: you might need node, SSH, and potentially secured webserver/proxy ports to be open, depending on the components you run).\n- Update your SSH configuration to prevent password-based logon.\n- Ensure that you use an offline workflow; you should never need to have your offline keys on online nodes. The tools provide backup/restore functionality to only pass online keys to online nodes.\n
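A hedged example of the points above on an Ubuntu/Debian host - package names, ports and service names vary per distribution and per the components you run, so treat these as illustrative only:

```bash
# Accurate system time (chrony is one option; systemd-timesyncd or ntp also work)
sudo apt -y install chrony
timedatectl                         # verify "System clock synchronized: yes"

# Firewall: only open what you need (22 = SSH, 6000 = example cardano-node port)
sudo ufw allow 22/tcp
sudo ufw allow 6000/tcp
sudo ufw enable

# SSH hardening: disable password-based logons (ensure key-based access works first!)
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh          # the service may be named sshd on other distributions
```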
    "},{"location":"basics/#pre-requisites","title":"Pre-Requisites","text":"

    Reminder !!

You're expected to run the commands below from the same session, using the same working directories as indicated, and using a non-root user with sudo access. You are expected to be familiar with this as part of the pre-requisite skill sets for stake pool operators.

    "},{"location":"basics/#os-prereqs","title":"Set up OS packages, folder structure and fetch files from repo","text":"

The pre-requisites for Linux systems are automated to be executed as a single script. To download the pre-requisites script, execute the below:

    mkdir \"$HOME/tmp\";cd \"$HOME/tmp\"\n# Install curl\n# CentOS / RedHat - sudo dnf -y install curl\n# Ubuntu / Debian - sudo apt -y install curl\ncurl -sS -o guild-deploy.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/guild-deploy.sh\nchmod 755 guild-deploy.sh\n

Please familiarise yourself with the syntax of guild-deploy.sh before proceeding. The usage syntax can be checked using ./guild-deploy.sh -h ; sample output below:

    Usage: guild-deploy.sh [-n <mainnet|preprod|guild|preview>] [-p path] [-t <name>] [-b <branch>] [-u] [-s [p][b][l][f][d][c][o][w][x]]\nSet up dependencies for building/using common tools across cardano ecosystem.\nThe script will always update dynamic content from existing scripts retaining existing user variables\n\n-n    Connect to specified network instead of mainnet network (Default: connect to cardano mainnet network) eg: -n guild\n-p    Parent folder path underneath which the top-level folder will be created (Default: /opt/cardano)\n-t    Alternate name for top level folder - only alpha-numeric chars allowed (Default: cnode)\n-b    Use alternate branch of scripts to download - only recommended for testing/development (Default: master)\n-u    Skip update check for script itself\n-s    Selective Install, only deploy specific components as below:\n  p   Install common pre-requisite OS-level Dependencies for most tools on this repo (Default: skip)\nb   Install OS level dependencies for tools required while building cardano-node/cardano-db-sync components (Default: skip)\nl   Build and Install libsodium fork from IO repositories (Default: skip)\nf   Force overwrite entire content of scripts and config files (backups of existing ones will be created) (Default: skip)\nd   Download latest (released) binaries for bech32, cardano-address, cardano-node, cardano-cli, cardano-db-sync and cardano-submit-api binaries (Default: skip)\nc   Install/Upgrade CNCLI binary (Default: skip) # (1)!\no   Install/Upgrade Ogmios Server binary (Default: skip)\nw   Install/Upgrade Cardano Hardware CLI (Default: skip)\nx   Install/Upgrade Cardano Signer binary (Default: skip)\n
1. If you receive an error for glibc, it is likely due to a build mismatch between the pre-compiled binary and your OS, which is not uncommon. You may need to compile cncli manually on your OS as per instructions here - make sure to copy the output binary to the \"${HOME}/.local/bin\" folder.

This script uses an opt-in selection of what you'd like it to do (as opposed to the previous version, which tried to auto-detect versions). The defaults without any arguments will only update the static part of the script contents for you. A typical example install of most components - without overwriting the static parts of existing files - for the preview network would be:

    ./guild-deploy.sh -b master -n preview -t cnode -s pdlcowx\n. \"${HOME}/.bashrc\"\n

If, instead of downloading binaries, you'd like to build the components yourself, you could use:

    ./guild-deploy.sh -b master -n preview -t cnode -s pblcowx\n. \"${HOME}/.bashrc\"\n

Lastly, if you want to update your scripts but not install any additional dependencies, you may simply run:

    ./guild-deploy.sh -b master -n preview -t cnode\n
    "},{"location":"basics/#folder-structure","title":"Folder structure","text":"

Running the script above will create the folder structure as per below, for your reference. You can replace the top-level folder /opt/cardano/cnode by editing the value of CNODE_HOME in your ~/.bashrc and $CNODE_HOME/files/env files (a relocation sketch follows the tree below):

    /opt/cardano/cnode            # Top-Level Folder\n\u251c\u2500\u2500 ...\n\u251c\u2500\u2500 files                     # Config, genesis and topology files\n\u2502   \u251c\u2500\u2500 ...\n\u2502   \u251c\u2500\u2500 byron-genesis.json    # Byron Genesis file referenced in config.json\n\u2502   \u251c\u2500\u2500 shelley-genesis.json  # Genesis file referenced in config.json\n\u2502   \u251c\u2500\u2500 alonzo-genesis.json    # Alonzo Genesis file referenced in config.json\n\u2502   \u251c\u2500\u2500 config.json           # Config file used by cardano-node\n\u2502   \u2514\u2500\u2500 topology.json         # Map of chain for cardano-node to boot from\n\u251c\u2500\u2500 db                        # DB Store for cardano-node\n\u251c\u2500\u2500 guild-db                  # DB Store for guild-specific tools and additions (eg: cncli, cardano-db-sync's schema)\n\u251c\u2500\u2500 logs                      # Logs for cardano-node\n\u251c\u2500\u2500 priv                      # Folder to store your keys (permission: 600)\n\u251c\u2500\u2500 scripts                   # Scripts to start and interact with cardano-node\n\u2514\u2500\u2500 sockets                   # Socket files created by cardano-node\n
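A hedged sketch of using a different top-level folder - either by deploying to an alternate path up front using the guild-deploy.sh flags shown earlier, or by updating the references afterwards. The /data/cardano path below is only an example:

```bash
# Option 1: deploy with an alternate parent path / top-level name from the start
./guild-deploy.sh -p /data/cardano -t cnode

# Option 2: for an existing deployment, update the CNODE_HOME references manually
# (move the folder itself with mv before or after updating the references)
sed -i 's|/opt/cardano/cnode|/data/cardano/cnode|g' ~/.bashrc "${CNODE_HOME}/files/env"
. ~/.bashrc
```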
    "},{"location":"build/","title":"Overview","text":"

The documentation here uses instructions from IOHK repositories as a foundation, with additional info contributed where appropriate. Note that not everyone needs to build each component. You can refer to architecture to understand and qualify which of the components built by IO you want to run.

    "},{"location":"build/#components","title":"Components","text":"

    For most Pool Operators, simply building cardano-node should be enough. Use the below to decide whether you need other components:

graph TB A([Interact with HD Wallets locally]) B([Explore blockchain locally]) C([Easy pool-ops and fund management]) D([Create Custom Assets]) E([Monitor node using Terminal UI]) F([Sign/verify any data using crypto keys]) N(Node) O(Ogmios) P(gRest/Koios) Q(DBSync) R(Wallet) S(CNTools) T(Tx Submit API) U(GraphQL) V(OfflineMetadataTools) X(gLiveView) Y(cardano-signer) Z[(PostgreSQL)] N --x C --x S N --x D --x S & V N --x E --x X N --x B B --x U --x Q B --x P --x Q P --x O P --x T F ---x Y N --x A --x R Q --x Z

    Important

We strongly prefer use of gRest over GraphQL components due to performance, security, simplicity, control and most importantly - consistency benefits. Please refer to the official documentation if you're interested in GraphQL or Cardano-Rest components instead.

    Note

The instructions are intentionally limited to stack/cabal to avoid wait times/availability issues of nix/docker files on a rapidly developing codebase - this also helps us avoid managing multiple versions of instructions.

    "},{"location":"build/#description-for-components-built-by-community","title":"Description for components built by community","text":""},{"location":"build/#cntools","title":"CNTools","text":"

    A swiss army knife for pool operators, primarily built by Ola, to simplify typical operations regarding their wallet keys and pool management. You can read more about it here

    "},{"location":"build/#gliveview","title":"gLiveView","text":"

    A local node monitoring tool, primarily built by Ola, to use in addition to remote monitoring tools like Prometheus/Grafana, Zabbix or IOG's RTView. This is especially useful when moving to a systemd deployment - if you haven't done so already - as it offers an intuitive UI to monitor the node status. You can read more about it here

    "},{"location":"build/#topology-updater","title":"Topology Updater","text":"

A temporary node-to-node discovery solution, run by Markus, that was started initially to bridge the gap created while awaiting completion of P2P on the cardano network, but has since become an important lifeline for network health - allowing everyone to activate their relay nodes without having to postpone and wait for manual topology completion requests. You can read more about it here

    "},{"location":"build/#koiosgrest","title":"Koios/gRest","text":"

    A full-featured local query layer node to explore blockchain data (via dbsync) using standardised pre-built queries served via API as per standard from Koios - for which user can opt to participate in elastic query layer. You can read more about build steps here and reference API endpoints here

    "},{"location":"build/#ogmios","title":"Ogmios","text":"

    A lightweight bridge interface for cardano-node. It offers a WebSockets API that enables local clients to speak Ouroboros' mini-protocols via JSON/RPC. You can read more about it here

    "},{"location":"build/#cncli","title":"CNCLI","text":"

    A CLI tool written in Rust by Andrew Westberg for low-level communication with cardano-node. It is commonly used by SPOs to check their leader logs (integrates with CNTools as well as gLiveView) or to send their pool's health information to https://pooltool.io. You can read more about it here

    "},{"location":"build/#cardano-signer","title":"Cardano Signer","text":"

    A tool written by Martin to sign/verify data (hex, text or binary) using cryptographic keys to generate data as per CIP-8 or CIP-36 standards. You can read more about it here

    "},{"location":"contributors/","title":"Contributors","text":"

    Everyone is welcome to contribute to the guide, as well as the repository. Below is just a thank you to people who have been contributing consistently:

    Adam Chris Damjan Homer Markus OCG Ola Ahlman Pal Dorogi Papacarp PegasusPool Psychomb RdLrT RedOracle SmaugPool

    To start contributing, simply hit the github repository and raise Issue/Pull Request

    "},{"location":"grest-meets/","title":"GRest Meeting summaries","text":"

    Thank you all for joining and contributing to the project

    Below you can find a short summary of every GRest meeting held, both for logging purposes and for those who were not able to attend.

    "},{"location":"grest-meets/#participants","title":"Participants:","text":"Participant 16Sep2021 02Sep2021 26Aug2021 19Aug2021 12Aug2021 29Jul2021 22Jul2021 15Jul2021 09Jul2021 02Jul2021 25Jun2021 Damjan Homer Markus Ola RdLrT Red Papacarp Paddy GimbaLabs 16Sep2021 02Sep2021 26Aug2021 19Aug2021 12Aug2021 29Jul2021 22Jul2021 15Jul2021 09Jul2021 02Jul2021

    After the initial stand-up updates from participants, we went through the entire Trello board, updating/deleting existing tickets and creating some new ones.

    25Jun2021"},{"location":"grest-meets/#scheduling-running-update-queries","title":"Scheduling running update queries","text":""},{"location":"grest-meets/#refactor-of-queries","title":"Refactor of queries","text":""},{"location":"grest-meets/#postgres-tuning","title":"postgres tuning","text":""},{"location":"grest-meets/#updates","title":"Updates","text":""},{"location":"grest-meets/#queries","title":"Queries","text":""},{"location":"grest-meets/#problems","title":"Problems","text":""},{"location":"grest-meets/#actions","title":"Actions","text":""},{"location":"grest-meets/#queries_1","title":"Queries","text":""},{"location":"grest-meets/#transaction-submission-feature","title":"Transaction submission feature","text":""},{"location":"grest-meets/#db-replication-presentation-by-redoracle","title":"DB replication presentation by Redoracle","text":""},{"location":"grest-meets/#process-for-upgrading-our-instances","title":"Process for upgrading our instances:","text":""},{"location":"grest-meets/#queries_2","title":"Queries:","text":""},{"location":"grest-meets/#stake-distribution","title":"Stake distribution","text":""},{"location":"grest-meets/#tx-history","title":"Tx History","text":""},{"location":"grest-meets/#problems_1","title":"PROBLEMS","text":""},{"location":"grest-meets/#actions_1","title":"ACTIONS","text":""},{"location":"grest-meets/#problems_2","title":"PROBLEMS","text":""},{"location":"grest-meets/#actions_2","title":"ACTIONS","text":""},{"location":"grest-meets/#problems_3","title":"PROBLEMS","text":""},{"location":"grest-meets/#actions_3","title":"ACTIONS","text":"
    1. Team

      • catch live stake distributions in a separate table (in our grest schema)
        • these queries can run on a schedule
        • response comes from the instance with the latest data
      • other approaches:
        • possibly distribute pools between instances (complex approach)
        • run full query once and only check for new/leaving delegators (probably impossible because of existing delegator UTXO movements)
      • implement monitoring of execution times for all the queries
      • come up with a timeline for launch (next call)
      • stress test before launch
      • start building queries listed on Trello board
    2. Individual

      • sync db-sync instances to commit 84226d33eed66be8e61d50b7e1dacebdc095cee9 on release/10.1.x
      • update setups to reflect recent directory restructuring and updated instructions
    "},{"location":"grest-meets/#introduction-for-new-joiner-paddy","title":"Introduction for new joiner - Paddy","text":""},{"location":"grest-meets/#problems_4","title":"Problems","text":""},{"location":"grest-meets/#action-items","title":"Action Items","text":""},{"location":"grest-meets/#deployment-scripts","title":"Deployment scripts","text":"

    Ola added automatic deployment of services to the scripts last week. We added new tasks on Trello ticket, including flags for multiple networks (guild, testnet, mainnet), haproxy service dynamically creating hosts and doc updates. Overall, the script works well with some manual interaction still required at the moment.

    "},{"location":"grest-meets/#supported-networks","title":"Supported Networks","text":"

    Just for the record here, a 16GB (or even 8GB) instance is enough to support both testnet and guild networks.

    "},{"location":"grest-meets/#db-sync-versioning","title":"db-sync versioning","text":"

    We agreed to use the release/10.1.x branch which is not yet released but built to include Alonzo migrations to avoid rework later. This version does require Alonzo config and hash to be in the node's config.json. This has to be done manually and the files are available here. Once fully released, all members should rebuild the released version to ensure each instance is running the same code.

    "},{"location":"grest-meets/#dns-naming","title":"DNS naming","text":"

    For the DNS setup ticket, we started to think about the instance names for the 2 DNS instances (orange in the graph). Submissions for names will be made in the Telegram group, and will probably make a poll once we have the entries finalised.

    "},{"location":"grest-meets/#monitoring-system","title":"Monitoring System","text":"

    Priyank started setting up the monitoring on his instance which can then easily be switched to a separate monitoring instance. We agreed to use Prometheus / Grafana combo for data source / visualisation. We'll probably need to create some custom archiving of data to keep it long term as Prometheus stores only the last 30 days of data.

    "},{"location":"grest-meets/#next-meeting","title":"Next meeting","text":"

We would like to make Friday @ 07:00 UTC the standard time and keep meetings at a weekly frequency. A poll will still be created for the next weeks, but if there are no objections / requests for switching the time around (which we have not had so far) we can go ahead with making Friday the standard, with polls no longer required and only reminders / Google invites sent every week.

    "},{"location":"grest-meets/#deployment-scripts_1","title":"Deployment scripts","text":"

    During the last week, work has been done on deployment scripts for all services (db-sync, gRest and haproxy) -> this is now in testing with updated instructions on trello. Everybody can put their name down on the ticket to signify when the setup is complete and note down any comments for bugs/improvements. This is the main priority at the moment as it would allow us to start transferring our setups to mainnet.

    "},{"location":"grest-meets/#switch-to-mainnet","title":"Switch to Mainnet","text":"

    Following on from that, we created a ticket for starting to set up mainnet instances -> we can use 32GB RAM to start and increase later. While making sure everything works against the guild network is priority, people are free to start on this as well as we anticipate we are almost ready for the switch.

    "},{"location":"grest-meets/#supported-networks_1","title":"Supported Networks","text":"

    This brings me to another discussion point which is on which networks are to be supported. After some discussion, it was agreed to keep beefy servers for mainnet, and have small independent instances for testnet maintained by those interested, while guild instance is pretty lightweight and useful to keep.

    "},{"location":"grest-meets/#monitoring-system_1","title":"Monitoring System","text":"

The ticket for creating a centralised monitoring system was discussed and updated. I would say it would be good to have at least a basic version of the system in place around the time we switch to mainnet. The system could eventually serve for: analysis of instance performances and subsequent tuning, endpoints usage, anticipation of system requirement increases, etc.

    I would say that this should be an important topic of the next meeting to come up with an approach on how we will structure this system so that we can start building it in time for mainnet switch.

    "},{"location":"grest-meets/#handling-ssl","title":"Handling SSL","text":"

    Enabling SSL was agreed to not be required by each instance, but is optional and documentation should be created for how to automate the process of renewing SSL certificates for those wishing to add it to their instance. The end user facing endpoints \"Instance Checker\" will of course be SSL-enabled.

    "},{"location":"grest-meets/#next-meeting_1","title":"Next meeting","text":"

    We somewhat agreed to another meeting next week again at the same time, but some participants aren't 100% for availability. Friday at 07:00 UTC might be a good standard time we hold on to, but I will make a poll like last time so that we can get more info before confirming the meeting.

    "},{"location":"grest-meets/#meeting-structure","title":"Meeting Structure","text":"

As this was the first meeting, at the start we discussed the meeting structure. In general, we agreed to something like the list below, but this can definitely change in the future:

1) 2-liner (60s) round-the-table stand-ups by everyone to sync up on what they were doing / are planning to do / mention struggles etc. This itself often sparks discussions. 2) going through the Trello board tasks with the intention of discussing and possibly assigning them to individuals / smaller groups (maybe 1-2-3 people choose to work together on a single task)

    "},{"location":"grest-meets/#stand-ups","title":"Stand-ups","text":"

    We then proceeded to give a status of where we are individually in terms of what's been done, a summary below:

    "},{"location":"grest-meets/#main-discussion-points","title":"Main discussion points","text":"
    1. Directory structure on the repo -> General agreement is to have anything related to db-sync/postgREST separated from the current cnode-helper-scripts directory. We can finalise the end locations of files a bit later, for now intent should be to simply add them all to /files/dbsync folder. prereqs.sh addendum can be done once artifacts are finalised (added a Trello ticket for tracking).
    2. DNS/haproxy configurations: We have two options: a. controlled approach for endpoints - wherein there is a layer of haproxy that will load balance and ensure tip being in sync for individual providers (individuals can provide haproxy OR gRest instances). b. completely decentralised - each client to maintain haproxy endpoint, and fails over to other node if its not up to recent tip. I think that in general, it was agreed to use a hybrid approach. Details are captured in diagram here. DNS endpoint can be reserved post initial testing of haproxy-agent against mainnet nodes.
    3. Internal monitoring system This would be important and useful and has not been mentioned before this meeting (as far as I know). Basically, a system for monitoring all of our instances together and also handling alerts. Not only for ensuring good quality of service, but also for logging and inspection of short- and long-term trends to better understand what's happening. A ticket is added to trello board
    "},{"location":"grest-meets/#next-meeting_2","title":"Next meeting","text":"

    All in all, I think we saw that there is need for these meetings as there are a lot of things to discuss and new ideas come up (like the monitoring system). We went for over an hour (~1h15min) and still didn't have enough time to go through the board, we basically only touched the DNS/haproxy part of the board. This tells me that we are in a stage where more frequent meetings are required, weekly instead of biweekly, as we are in the initial stage and it's important to build things right from the start rather than having to refactor later on. With that, the participants in general agreed to another meeting next week, but this will be confirmed in the TG chat and the times can be discussed then.

    "},{"location":"sidebar/","title":"Tree","text":""},{"location":"upgrade/","title":"Upgrade","text":"One-Time major upgrade for Koios Scripts from 20-Jan-2023 (expand for details)

The scripts on the guild-operators repository have gone through quite a few changes to accommodate the below:

Some of the above required us to add breaking changes to some scripts, but hopefully the above explains the premise for those changes. To ease this one-time upgrade process for existing deployments, we have tried to come up with the guide below; feel free to edit this file to improve the documents based on your experience. Again, apologies in advance to those who do not agree with the above changes (the old code would of course remain unimpacted at tag legacy-scripts, so if you'd like to stick to the old scripts, you can use -b legacy-scripts for your tools to switch back).

    "},{"location":"upgrade/#steps-for-ugrading","title":"Steps for Ugrading","text":"

    Warning

    Make sure you go through upgrade steps for your setup in a non-mainnet environment first!

    Remember

    Please add any environment-specific parameters (eg: custom top level folder, network flag, etc) to the execution command below, similar to prereqs.sh (check new syntax using guild-deploy.sh -h)

    mkdir \"$HOME/tmp\";cd \"$HOME/tmp\"\ncurl -sS -o guild-deploy.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/guild-deploy.sh\nchmod 700 guild-deploy.sh\n./guild-deploy.sh -s f -b master\n
    source \"${HOME}\"/.bashrc\necho \"${PATH}\"\n

You can move the binaries by using the mv command (for example, if you don't have any other files in these folders, you can use the command below):

    Note

Ideally, you should shut down services (eg: cnode, cnode-dbsync, etc) prior to running the below to ensure they run from the new location (you can also re-deploy them if you haven't done so in a while, eg: ./cnode.sh -d). At the end of the guide, you can start them back up.

    mv -t \"${HOME}\"/.local/bin/ \"${HOME}\"/.cabal/bin/* \"${HOME}\"/.cargo/bin/* \"${HOME}\"/bin/*\n
    whereis bech32 cardano-address cardano-cli cardano-db-sync cardano-hw-cli cardano-node cardano-submit-api cncli ogmios\n

The above might result in some lines having more than one entry (eg: you might have cardano-cli in \"${HOME}\"/.cabal/bin and \"${HOME}\"/.local/bin) - for which you'd want to delete the reference(s) not in \"${HOME}\"/.local/bin - while in other cases you might have no values at all (eg: you may not use cardano-db-sync, cncli, ogmios and/or cardano-hw-cli). You need not take any action for the binaries you do not use.
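A hedged example of cleaning up a duplicate entry - using cardano-cli as the sample binary - after confirming which copy you want to keep:

```bash
# List every copy found on PATH; keep the one in "${HOME}"/.local/bin
which -a cardano-cli
# If an older copy remains in "${HOME}"/.cabal/bin (or .cargo/bin), remove it
rm -f "${HOME}"/.cabal/bin/cardano-cli
```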

    "},{"location":"upgrade/#supportimprovements","title":"Support/Improvements","text":"

Hope the guide above helps you with the migration, but again - we could've missed some edge cases. If so, please report via chat in the Koios Discussions channel only. Please DO NOT make edits to the script content based on forum/alternate guides/channels; while done with the best intentions, there have been solutions put online that modify files unnecessarily instead of correcting configs and disabling updates, and such actions will only cause trouble for future updates.

    "},{"location":"Appendix/RecoverByronWallet/","title":"Unofficial Instructions for recovering your Byron Era funds on the new Incentivized Shelley Testnet","text":""},{"location":"Appendix/RecoverByronWallet/#1-grab-and-install-haskell","title":"1. Grab and install Haskell","text":"
    curl -sSL https://get.haskellstack.org/ | sh\n
    "},{"location":"Appendix/RecoverByronWallet/#2-get-the-wallet","title":"2. Get the wallet","text":"

Note: you must build from source as of today, as there are changes you need that just got into master.

    git clone https://github.com/input-output-hk/cardano-wallet.git\n

    "},{"location":"Appendix/RecoverByronWallet/#3-go-into-the-wallet-directory","title":"3. Go into the wallet directory","text":"
    cd cardano-wallet\n
    "},{"location":"Appendix/RecoverByronWallet/#4-build-the-wallet","title":"4. Build the wallet","text":"

    stack build --test --no-run-tests\n
If it fails, there are a few reasons we have found: - The cardano build instructions reference a few things that may be missing. Check those. - Or maybe one of these would help:

    "},{"location":"Appendix/RecoverByronWallet/#libssl","title":"Libssl:","text":"
    sudo apt install libssl-dev\n
    "},{"location":"Appendix/RecoverByronWallet/#sqlite","title":"Sqlite :","text":"
    sudo apt-get install sqlite3 libsqlite3-dev \n
    "},{"location":"Appendix/RecoverByronWallet/#gmp","title":"gmp:","text":"
    sudo apt-get install libgmp3-dev \n
    "},{"location":"Appendix/RecoverByronWallet/#systemd-dev","title":"systemd dev:","text":"
    sudo apt install libsystemd-dev\n

Get coffee... it takes a while.

    "},{"location":"Appendix/RecoverByronWallet/#5-when-its-done-install-executables-to-your-path","title":"5. When its done, install executables to your path","text":"
    stack install\n
    "},{"location":"Appendix/RecoverByronWallet/#6-test-to-make-sure-cardano-wallet-jormungandr-works-fine","title":"6. Test to make sure cardano-wallet-jormungandr works fine.","text":"

Generate the new mnemonics you will need below. Note that this generates 15 words, as opposed to your Byron-era mnemonics which were only 12 words.

    cardano-wallet-jormungandr mnemonic generate\n
    "},{"location":"Appendix/RecoverByronWallet/#7-launch-the-wallet-as-a-service","title":"7. Launch the wallet as a service.","text":"

You can either open another terminal window or use screen or something. Anyway, wherever you run this next command, you won't be able to use that terminal anymore until you stop the wallet.

Change --node-port 3001 to wherever you have your jormungandr rest interface running. For me it was 5001, so:

Change --port 3002 to wherever you want to access the wallet interface. If you have other things running, avoid those ports. For most, 3002 should be free.

Just to future-proof these instructions: the genesis should be whatever genesis you are on.

    cardano-wallet-jormungandr serve --node-port 3001 --port 3002 --genesis-block-hash e03547a7effaf05021b40dd762d5c4cf944b991144f1ad507ef792ae54603197\n
    "},{"location":"Appendix/RecoverByronWallet/#8-restore-your-byron-wallet","title":"8. Restore your byron wallet:","text":"

    --->in another window

Replace foo, foo, foo with all your mnemonics from the Byron wallet you are restoring.

    Also, if you put your wallet on a different port than 3002, fix that too

    curl -X POST -H \"Content-Type: application/json\" -d '{ \"name\": \"legacy_wallet\", \"mnemonic_sentence\": [\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\"], \"passphrase\": \"areallylongpassword\"}' http://localhost:3002/v2/byron-wallets\n
That's going to spit out some information about the wallet it creates; you should see the value of your wallet - hopefully it's not zero. You will also need the wallet ID for the next step.

    "},{"location":"Appendix/RecoverByronWallet/#9-create-your-shelley-wallet","title":"9. Create your shelley wallet:","text":"

Remember all those mnemonics you made above? Put them here instead of all the foo's.

    curl -X POST -H \"Content-Type: application/json\" -d '{ \"name\": \"pool_wallet\", \"mnemonic_sentence\": [\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\"], \"passphrase\": \"areallylongpasswordagain\"}' http://localhost:3002/v2/wallets\n
The important thing to get from this command is the wallet id.

    "},{"location":"Appendix/RecoverByronWallet/#10-migrate-your-funds","title":"10. Migrate your funds","text":"

Now you are ready to migrate your wallet. Replace the <old wallet id> and <new wallet id> with the values you got above.

    curl -X POST -H \"Content-Type: application/json\" -d '{\"passphrase\": \"areallylongpassword\"}' http://localhost:3002/v2/byron-wallets/<old wallet id>/migrations/<new wallet id>\n
    "},{"location":"Appendix/RecoverByronWallet/#11-congratulations-your-funds-are-now-in-your-new-wallet","title":"11. Congratulations. your funds are now in your new wallet.","text":"

    From here we recommend you send them to a new address entirely owned and created by jcli or whatever method you have been using for the testnet process.

    This technically may not be required. But a lot of us did it and we know it works for setting up pools and stuff.

Send a small amount first, just to make sure you are in control of the transaction and don't send your funds to la la land.

    If you want to send to another address use the command below, but replace the address that you want to send it to, the amount, and your <new wallet id>

    curl -X POST -H \"Content-Type: application/json\" -d '{\"payments\": [ { \"address\": \"<address to send to>\"\", \"amount\": { \"quantity\": 83333330000000, \"unit\": \"lovelace\" } } ], \"passphrase\": \"areallylongpasswordagain\"}' http://localhost:3002/v2/wallets/<new wallet id>/transactions\n

    "},{"location":"Appendix/monitoring/","title":"Monitoring","text":"

    Ensure the Pre-Requisites are in place before you proceed.

This is an easy-to-use script to automate setting up of monitoring tools. It automates the following tasks: - Installs Prometheus, Node Exporter and Grafana servers for your respective Linux architecture. - Configures Prometheus to connect to the cardano node and node exporter jobs. - Provisions the installed Prometheus server to be automatically available as a data source in Grafana. - Provisions two of the common Grafana dashboards used to monitor cardano-node (by SkyLight and IOHK) to be readily consumed from Grafana. - Deploys prometheus, node_exporter and grafana-server as systemd services on Linux. - Starts and enables those services.

Note that securing prometheus/grafana servers via TLS encryption and other security best practices is out of scope for this document; it's mainly aimed at helping you get started with monitoring without much fuss.

Important: Ensure that you've opened the firewall port for the grafana server (the default used in this script is 5000)

    "},{"location":"Appendix/monitoring/#download-setup_monsh","title":"Download setup_mon.sh","text":"

If you have run guild-deploy.sh, you can skip this step. To download the monitoring script, you can execute the commands below:

    cd $CNODE_HOME/scripts\nwget https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/setup_mon.sh\nchmod 750 setup_mon.sh\n

    "},{"location":"Appendix/monitoring/#customise-any-environment-variables","title":"Customise any Environment Variables","text":"

The default selection may not always be usable for everyone. You can customise further environment variable settings by opening the script in an editor (eg: vi setup_mon.sh), and updating the variables below to your liking:

    #!/usr/bin/env bash\n# shellcheck disable=SC2209,SC2164\n\n######################################################################\n#### Environment Variables\n######################################################################\nCNODE_IP=127.0.0.1\nCNODE_PORT=12798\nGRAFANA_HOST=0.0.0.0\nGRAFANA_PORT=5000\nPROJ_PATH=/opt/cardano/monitoring\nPROM_HOST=127.0.0.1\nPROM_PORT=9090\nNEXP_PORT=$(( PROM_PORT + 1 ))\n````\n\n#### Set up Monitoring\n\nExecute setup_mon.sh with full path to destination folder you want to setup monitoring in. If you're following guild folder structure, you do not need to specify `-d`. Read the usage comments below before you run the actual script.\n\nNote that to deploy services as systemd, the script expect sudo access is available to the user running the script.\n\n``` bash\ncd $CNODE_HOME/scripts\n# To check Usage parameters:\n# ./setup_mon.sh -h\n#Usage: setup_mon.sh [-d directory] [-h hostname] [-p port]\n#Setup monitoring using Prometheus and Grafana for Cardano Node\n#-d directory      Directory where you'd like to deploy the packages for prometheus , node exporter and grafana\n#-i IP/hostname    IPv4 address or a FQDN/DNS name where your cardano-node (relay) is running (check for hasPrometheus in config.json; eg: 127.0.0.1 if same machine as cardano-node)\n#-p port           Port at which your cardano-node is exporting stats (check for hasPrometheus in config.json; eg: 12798)\n./setup_mon.sh\n# \n# Downloading prometheus v2.18.1...\n# Downloading grafana v7.0.0...\n# Downloading exporter v0.18.1...\n# Downloading grafana dashboard(s)...\n#   - SKYLight Monitoring Dashboard\n#   - IOHK Monitoring Dashboard\n# \n# NOTE: Could not create directory as rdlrt, attempting sudo ..\n# NOTE: No worries, sudo worked !! Moving on ..\n# Configuring components\n# Registering Prometheus as datasource in Grafana..\n# Creating service files as root..\n# \n# =====================================================\n# Installation is completed\n# =====================================================\n# \n# - Prometheus (default): http://127.0.0.1:9090/metrics\n#     Node metrics:       http://127.0.0.1:12798\n#     Node exp metrics:   http://127.0.0.1:9091\n# - Grafana (default):    http://0.0.0.0:5000\n# \n# \n# You need to do the following to configure grafana:\n# 0. The services should already be started, verify if you can login to grafana, and prometheus. If using 127.0.0.1 as IP, you can check via curl\n# 1. Login to grafana as admin/admin (http://0.0.0.0:5000)\n# 2. Add \"prometheus\" (all lowercase) datasource (http://127.0.0.1:9090)\n# 3. Create a new dashboard by importing dashboards (left plus sign).\n#   - Sometimes, the individual panel's \"prometheus\" datasource needs to be refreshed.\n# \n# Enjoy...\n# \n# Cleaning up...\n
    "},{"location":"Appendix/monitoring/#view-dashboards","title":"View Dashboards","text":"

You should now be able to login to the grafana dashboard, using the public IP of your server, at port 5000. The initial credentials to login would be admin/admin, and you will be asked to update your password upon first login. Once logged on, you should be able to go to Manage > Dashboards and select the dashboard you'd like to view. Note that if you've just started the server, you might see empty graphs, as the initial interval for dashboards is 12 hours. You can change it to 5 minutes by looking at the top right section of the page.
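A few hedged post-install checks, using the service names and default ports from the task list above - adjust if you customised the variables in setup_mon.sh:

```bash
# Confirm the systemd services deployed by setup_mon.sh are running
sudo systemctl status prometheus node_exporter grafana-server

# Prometheus health endpoint and Grafana login page (default ports used by this script)
curl -s http://127.0.0.1:9090/-/healthy
curl -sI http://127.0.0.1:5000 | head -n1
```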

    Thanks to Pal Dorogi for the original setup instructions used for modifying.

    "},{"location":"Appendix/postgres/","title":"Sample Postgres Setup","text":"

These deployment instructions are used for reference while building the cardano-db-sync tool, with the scope being ease of setup, and some tuning baselines for those who are new to Postgres DB. It is recommended to customise these as per your needs for Production builds.

    Important

    You'd find it pretty useful to set up ZFS on your system prior to setting up Postgres, to help with your IOPs throughput requirements. You can find sample install instructions here. You can set up your entire root mount to be on ZFS, or you can opt to mount a file as ZFS on \"${CNODE_HOME}\"

    "},{"location":"Appendix/postgres/#install-postgresql-server","title":"Install PostgreSQL Server","text":"

    Execute commands below to set up Postgres Server

    # Determine OS platform\nOS_ID=$( (grep -i ^ID_LIKE= /etc/os-release || grep -i ^ID= /etc/os-release) | cut -d= -f 2)\nDISTRO=$(grep -i ^NAME= /etc/os-release | cut -d= -f 2)\n\nif [ -z \"${OS_ID##*debian*}\" ]; then\n#Debian/Ubuntu\nwget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -\n  RELEASE=$(lsb_release -cs)\necho \"deb [arch=amd64] http://apt.postgresql.org/pub/repos/apt/ ${RELEASE}\"-pgdg main | sudo tee  /etc/apt/sources.list.d/pgdg.list\n  sudo apt-get update\n  sudo apt-get -y install postgresql-15 postgresql-server-dev-15 postgresql-contrib libghc-hdbc-postgresql-dev\n  sudo systemctl restart postgresql\n  sudo systemctl enable postgresql\nelse\necho \"We have no automated procedures for this ${DISTRO} system\"\nfi\n
    "},{"location":"Appendix/postgres/#create-user-in-postgres","title":"Create User in Postgres","text":"

    Login to Postgres instance as superuser:

    echo $(whoami)\n# <user>\nsudo su postgres\npsql\n

Note the <user> returned as the output of the echo $(whoami) command. Replace all instances of <user> in the documentation below. Execute the below at the psql prompt, replacing <user> and PasswordYouWant with your OS user (the output of the echo $(whoami) command executed above) and a password you'd like to authenticate to Postgres with:

    CREATE ROLE <user> SUPERUSER LOGIN;\nALTER USER <user> PASSWORD 'PasswordYouWant';\n\\q\n
Type exit at the shell to return to your own user from the postgres user.

    "},{"location":"Appendix/postgres/#verify-login-to-postgres-instance","title":"Verify Login to postgres instance","text":"
    export PGPASSFILE=$CNODE_HOME/priv/.pgpass\necho \"/var/run/postgresql:5432:cexplorer:*:*\" > $PGPASSFILE\nchmod 0600 $PGPASSFILE\npsql postgres\n# psql (15.0)\n# Type \"help\" for help.\n# \n# postgres=#\n
    "},{"location":"Appendix/postgres/#tuning-your-instance","title":"Tuning your instance","text":"

Before you start populating your DB instance using dbsync data, now might be a good time to put some thought into the baseline configuration of your postgres instance by editing /etc/postgresql/15/main/postgresql.conf. Typically, you might find a lot of commonly recommended parameters in tuning guides. For our consideration, it would be nice to start with some baselines - for which we will use inputs from the example here, which would need to be customised further to your environment and resources.

In a typical Koios [gRest] setup, we use the below as minimum viable specs (i.e. 64GB RAM, >8 CPUs, >16K IOPs for ioping -q -S512M -L -c 10 -s8k . output when the postgres data directory is on ZFS configured with a max arc of 4GB), and we find the below configuration to be the best common setup:

| Parameter | Value | Comment |
| --- | --- | --- |
| data_directory | '/opt/cardano/cnode/guild-db/pgdb/15' | Move postgres data directory to ZFS mount at /opt/cardano/cnode, ensure it's writable by postgres user |
| effective_cache_size | 8GB | Be conservative as Node and DBSync by themselves will need ~32-40GB of RAM if ledger-state is enabled |
| effective_io_concurrency | 4 | Can go higher if you have substantially higher IOPs/IO throughputs |
| lc_time | 'en_US.UTF-8' | Just to use standard server-side time formatting between instances, can adapt to your preferences |
| log_timezone | 'UTC' | For consistency, to avoid timezone confusions |
| maintenance_work_mem | 512MB | Helps with vacuum/index/foreign key maintenance (with 4 workers, it's set to max 2GB) |
| max_connections | 200 | Allow maximum of 200 connections, the koios connections are still controlled via postgrest db-pool |
| max_parallel_maintenance_workers | 4 | Max workers postgres will use for maintenance |
| max_parallel_workers | 4 | Max workers postgres will use across the system |
| max_parallel_workers_per_gather | 2 | Parallel threads per query, do not increase to higher values as it will multiply memory usage |
| max_wal_size | 4GB | Used for WAL automatic checkpoints (disabled later) |
| max_worker_processes | 4 | Maximum number of background processes the system can support |
| min_wal_size | 1GB | Used for WAL automatic checkpoints (disabled later) |
| random_page_cost | 1.1 | Use a higher value if IOPs has trouble catching up (you can use 4 instead of 1.1) |
| shared_buffers | 4GB | Conservative limit to allow for node/dbsync/zfs memory usage |
| timezone | 'UTC' | For consistency, to avoid timezone confusions |
| wal_buffers | 16MB | WAL consumption in shared buffer (disabled later) |
| work_mem | 16MB | Base memory size before writing to temporary disk files |

In addition to the above, due to the nature of dbsync usage (syncing from the node and, on restart, traversing back to the last saved ledger-state snapshot), we can lean on the data retention of the blockchain itself - we're not affected by loss of volatile information upon a restart of the instance. Thus, we can relax some of the data-retention and corruption-protection settings, as those are IOPs/CPU load average costs that the instance does not need to spend. We'd recommend setting the 3 below in your /etc/postgresql/15/main/postgresql.conf (a combined sketch follows the table below):

| Parameter | Value |
| --- | --- |
| wal_level | minimal |
| max_wal_senders | 0 |
| synchronous_commit | off |
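A sketch of how the baselines from both tables might look in /etc/postgresql/15/main/postgresql.conf - these are the starting points discussed above, not definitive values, and should be adjusted to your hardware:

```
# Tuning baselines (from the tables above)
data_directory = '/opt/cardano/cnode/guild-db/pgdb/15'
shared_buffers = 4GB
effective_cache_size = 8GB
effective_io_concurrency = 4
maintenance_work_mem = 512MB
work_mem = 16MB
max_connections = 200
max_worker_processes = 4
max_parallel_workers = 4
max_parallel_workers_per_gather = 2
max_parallel_maintenance_workers = 4
random_page_cost = 1.1
min_wal_size = 1GB
max_wal_size = 4GB
wal_buffers = 16MB
timezone = 'UTC'
log_timezone = 'UTC'
lc_time = 'en_US.UTF-8'

# Relaxed durability for dbsync workloads (dbsync re-syncs from the node after a crash)
wal_level = minimal
max_wal_senders = 0
synchronous_commit = off
```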

    Once your changes are done, ensure to restart postgres service using sudo systemctl restart postgresql.

    "},{"location":"Build/dbsync/","title":"DBSync","text":"

    Important

    An average pool operator may not require cardano-db-sync at all. Please verify if it is required for your use as mentioned here.

    "},{"location":"Build/dbsync/#build-instructions","title":"Build Instructions","text":""},{"location":"Build/dbsync/#clone-the-repository","title":"Clone the repository","text":"

    Execute the below to clone the cardano-db-sync repository to $HOME/git folder on your system:

    cd ~/git\ngit clone https://github.com/input-output-hk/cardano-db-sync\ncd cardano-db-sync\n
    "},{"location":"Build/dbsync/#build-cardano-db-sync","title":"Build Cardano DB Sync","text":"

    You can use the instructions below to build the latest release of cardano-db-sync.

    git fetch --tags --all\ngit pull\n# Include the cardano-crypto-praos and libsodium components for db-sync\n# On CentOS 7 (GCC 4.8.5) we should also do\n# echo -e \"package cryptonite\\n  flags: -use_target_attributes\" >> cabal.project.local\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-db-sync/releases/latest | jq -r .tag_name)\n# Use `-l` argument if you'd like to use system libsodium instead of IOG fork of libsodium while compiling\n$CNODE_HOME/scripts/cabal-build-all.sh\n
    The above would copy the cardano-db-sync binary into ~/.local/bin folder.
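A small hedged check that the freshly built binary is the one found on your PATH (the --version flag is assumed to be supported by your dbsync release):

```bash
which cardano-db-sync        # expect: ~/.local/bin/cardano-db-sync
cardano-db-sync --version
```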

    "},{"location":"Build/dbsync/#prepare-db-for-sync","title":"Prepare DB for sync","text":"

Now that the binaries are available, let's create our database (when going through breaking changes, you may need to use --recreatedb instead of the --createdb used for the first time). Again, we expect that the PGPASSFILE environment variable is already set (refer to the top of this guide for sample instructions):

    cd ~/git/cardano-db-sync\n# scripts/postgresql-setup.sh --dropdb #if exists already, will fail if it doesnt - thats OK\nscripts/postgresql-setup.sh --createdb\n# Password:\n# Password:\n# All good!\n

    Verify you can see \"All good!\" as above!

    "},{"location":"Build/dbsync/#create-symlink-to-schema-folder","title":"Create Symlink to schema folder","text":"

A DBSync instance requires the schema files from the git repository to be present and available to it. You can either clone the ~/git/cardano-db-sync/schema folder OR create a symlink to the folder and make it available to the startup command we will be using. We will use the latter in the sample below:

    ln -s ~/git/cardano-db-sync/schema $CNODE_HOME/guild-db/schema\n
    "},{"location":"Build/dbsync/#restore-using-snapshot","title":"Restore using Snapshot","text":"

    If you're running a mainnet/preview/preprod instance of dbsync, you might want to consider use of dbsync snapshots as documented here. The snapshot files as of recent epoch are available via links in release notes.

At a high level, this would involve the steps below (read and update paths as per your environment):

# Replace the actual link below with the latest one from release notes\nwget -O /tmp/dbsyncsnap.tgz https://update-cardano-mainnet.iohk.io/cardano-db-sync/13/db-sync-snapshot-schema-13-block-7622755-x86_64.tgz\nrm -rf ${CNODE_HOME}/guild-db/ledger-state ; mkdir -p ${CNODE_HOME}/guild-db/ledger-state\ncd ~/git/cardano-db-sync\nscripts/postgresql-setup.sh --restore-snapshot /tmp/dbsyncsnap.tgz ${CNODE_HOME}/guild-db/ledger-state\n# The restore may take a while, please be patient and do not interrupt the restore process. Once the restore is successful, you may delete the downloaded snapshot as below:\n#   rm -f /tmp/dbsyncsnap.tgz\n
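
Optionally, before running the restore step above, you may want to verify the downloaded snapshot against the sha256 checksum published alongside it (if available), comparing the output manually:

sha256sum /tmp/dbsyncsnap.tgz\n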
    "},{"location":"Build/dbsync/#test-running-dbsync-manually-at-terminal","title":"Test running dbsync manually at terminal","text":"

Before deploying dbsync as a service, you'd want to ensure that you can run it interactively once. To do so, try the commands below:

    cd $CNODE_HOME/scripts\nexport PGPASSFILE=$CNODE_HOME/priv/.pgpass\n./dbsync.sh\n

You can monitor logs if needed via a parallel session using tail -10f $CNODE_HOME/logs/dbsync.json. If there are no errors, you can press Ctrl-C to stop the dbsync.sh execution and deploy it as a systemd service. To do so, use the commands below (the creation of the file is done using sudo permissions, but you can always deploy it manually):

    cd $CNODE_HOME/scripts\n./dbsync.sh -d\n# Deploying cnode-dbsync.service as systemd service..\n# cnode-dbsync.service deployed successfully!!\n

Now, to start the dbsync instance, you can run sudo systemctl start cnode-dbsync.
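
If you'd also like the service to start automatically on boot, and to keep an eye on it afterwards, the standard systemd commands apply (using the cnode-dbsync.service name deployed above):

sudo systemctl enable cnode-dbsync.service\nsudo systemctl status cnode-dbsync.service\njournalctl -f -u cnode-dbsync.service\n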

    Note

Note that while dbsync syncs, it might defer the creation of indexes/constraints to speed up the initial catch-up. Once relatively close to the tip, it will initiate the creation of indexes, which can take a while in the background. Thus, you might notice that query timings right after reaching the tip are not as good.

    "},{"location":"Build/dbsync/#update-dbsync","title":"Update DBSync","text":"

Updating dbsync can involve different tasks depending on the versions involved. We attempt to briefly explain the tasks involved:

    "},{"location":"Build/dbsync/#validation","title":"Validation","text":"

To validate, connect to your postgres instance and execute the commands below:

    export PGPASSFILE=$CNODE_HOME/priv/.pgpass\npsql cexplorer\n

You should now be at the psql prompt; you can check the tables and verify they're populated:

    \\dt\nselect * from meta;\n

    A sample output of the above two commands may look like below (the number of tables and names may vary between versions):

    cexplorer=# \\dt\nList of relations\n Schema |           Name            | Type  | Owner\n--------+---------------------------+-------+-------\n public | ada_pots                  | table | centos\n public | admin_user                | table | centos\n public | block                     | table | centos\n public | delegation                | table | centos\n public | delisted_pool             | table | centos\n public | epoch                     | table | centos\n public | epoch_param               | table | centos\n public | epoch_stake               | table | centos\n public | ma_tx_mint                | table | centos\n public | ma_tx_out                 | table | centos\n public | meta                      | table | centos\n public | orphaned_reward           | table | centos\n public | param_proposal            | table | centos\n public | pool_hash                 | table | centos\n public | pool_meta_data            | table | centos\n public | pool_metadata             | table | centos\n public | pool_metadata_fetch_error | table | centos\n public | pool_metadata_ref         | table | centos\n public | pool_owner                | table | centos\n public | pool_relay                | table | centos\n public | pool_retire               | table | centos\n public | pool_update               | table | centos\n public | pot_transfer              | table | centos\n public | reserve                   | table | centos\n public | reserved_ticker           | table | centos\n public | reward                    | table | centos\n public | schema_version            | table | centos\n public | slot_leader               | table | centos\n public | stake_address             | table | centos\n public | stake_deregistration      | table | centos\n public | stake_registration        | table | centos\n public | treasury                  | table | centos\n public | tx                        | table | centos\n public | tx_in                     | table | centos\n public | tx_metadata               | table | centos\n public | tx_out                    | table | centos\n public | withdrawal                | table | centos\n(37 rows)\n\n\n\nselect * from meta;\n id |     start_time      | network_name\n----+---------------------+--------------\n  1 | 2017-09-23 21:44:51 | mainnet\n(1 row)\n
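
While still at the psql prompt, one rough way to gauge how far dbsync has synced is to compare the newest block time against the current time (a sketch assuming the standard block table; it returns an approximate percentage):

select 100 * (extract(epoch from (max(time) at time zone 'UTC')) - extract(epoch from (min(time) at time zone 'UTC'))) / (extract(epoch from (now() at time zone 'UTC')) - extract(epoch from (min(time) at time zone 'UTC'))) as sync_percent from block;\n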
    "},{"location":"Build/graphql/","title":"Graphql","text":"

!> We have stopped maintaining documentation for Cardano-GraphQL and prefer the use of PostgREST instead. This component does not follow the process/technology/language (it requires npm, yarn) used by the other components (cabal/stack), and the value provided by cardano-graphql over the (haskell-based) hasura instance has been negligible. Also, an average pool operator may not require cardano-graphql at all; please verify if it is required for your use as mentioned here. The instructions below are out of date.

    Ensure the Pre-Requisites are in place before you proceed.

    "},{"location":"Build/graphql/#build-hasura-graphql-engine","title":"Build Hasura graphql-engine","text":"

Going with the spirit of the documentation here, instructions to build the graphql-engine binary :)

    cd ~/git\ngit clone https://github.com/hasura/graphql-engine\ncd graphql-engine/server\n$CNODE_HOME/scripts/cabal-build-all.sh\n
    This should make graphql-engine available at ~/.local/bin.

    "},{"location":"Build/graphql/#build-cardano-graphql","title":"Build cardano-graphql","text":"

    The build will fail if you are running a version of node.js earlier than 10.0.0 (which could happen if you have a conflicting version in your $PATH). You can verify your node version by executing the below:

    #check your version of node.js\nnode -v\n#if response is 10.0.0 or higher build can proceed. \n

    The commands below will help you compile the cardano-graphql node:

    cd ~/git\ngit clone https://github.com/input-output-hk/cardano-graphql\ncd cardano-graphql\ngit checkout v1.1.1\nyarn\n#yarn install v1.22.4\n# [1/4] Resolving packages...\n# [2/4] Fetching packages...\n# info fsevents@2.1.2: The platform \"linux\" is incompatible with this module.\n# info \"fsevents@2.1.2\" is an optional dependency and failed compatibility check. Excluding it from installation.\n# info fsevents@1.2.12: The platform \"linux\" is incompatible with this module.\n# info \"fsevents@1.2.12\" is an optional dependency and failed compatibility check. Excluding it from installation.\n# [3/4] Linking dependencies...\n# warning \" > graphql-type-datetime@0.2.4\" has incorrect peer dependency \"graphql@^0.13.2\".\n# warning \" > @typescript-eslint/eslint-plugin@1.13.0\" has incorrect peer dependency \"eslint@^5.0.0\".\n# warning \" > @typescript-eslint/parser@1.13.0\" has incorrect peer dependency \"eslint@^5.0.0\".\n# [4/4] Building fresh packages...\n# Done in 20.70s.\nyarn build\n# yarn run v1.22.4\n# $ yarn codegen:internal && yarn codegen:external && tsc -p . && shx cp src/schema.graphql dist/\n# $ graphql-codegen\n#   \u2714 Parse configuration\n#   \u2714 Generate outputs\n# $ graphql-codegen --config ./codegen.external.yml\n#   \u2714 Parse configuration\n#   \u2714 Generate outputs\n# Done in 38.11s.\ncd dist\nrsync -arvh ../node_modules ./\n

    "},{"location":"Build/graphql/#set-up-environment-for-cardano-graphql","title":"Set up environment for cardano-graphql","text":"

    cardano-graphql requires cardano-node, cardano-db-sync-extended, postgresql and graphql-engine to be set up and running. The below will help you map the components:

    export PGPASSFILE=$CNODE_HOME/priv/.pgpass\nIFS=':' read -r -a PGPASS <<< $(cat $PGPASSFILE)\nexport HASURA_GRAPHQL_ENABLE_TELEMETRY=false  # Optional.  To send usage data to Hasura, set to true.\nexport HASURA_GRAPHQL_DATABASE_URL=postgres://${PGPASS[3]}:${PGPASS[4]}@${PGPASS[0]}:${PGPASS[1]}/${PGPASS[2]}\nexport HASURA_GRAPHQL_ENABLE_CONSOLE=true\nexport HASURA_GRAPHQL_ENABLED_LOG_TYPES=\"startup, http-log, webhook-log, websocket-log, query-log\"\nexport HASURA_GRAPHQL_SERVER_PORT=4080\nexport HASURA_GRAPHQL_SERVER_HOST=0.0.0.0\nexport CACHE_ENABLED=true\nexport HASURA_URI=http://127.0.0.1:4080\ncd ~/git/cardano-graphql/dist\ngraphql-engine serve &\nnode index.js\n

    "},{"location":"Build/grest-changelog/","title":"Koios gRest Changelog","text":""},{"location":"Build/grest-changelog/#110rc-for-all-networks","title":"[1.1.0rc] - For all networks.","text":"

This will be the first major [breaking] release for Koios consumers in a while, and will be rolled out under a new base prefix (/api/v1). The major work in this release was to start making use of newer flags in dbsync which help the performance of queries under the new endpoints. You'll also see quite a few new endpoint additions below, which help with slightly lighter versions of queries. To keep migration paths easier, we will ensure both the v0 and v1 versions of the release are up for a month post-release, before retiring v0.
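
As an illustration of the new prefix, a simple tip query against the public Koios endpoint would now look like the below (substitute your own instance URL as needed):

curl -s https://api.koios.rest/api/v1/tip\n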

    "},{"location":"Build/grest-changelog/#new-endpoints-added","title":"New endpoints added:","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes","title":"Data Input/Output Changes:","text":""},{"location":"Build/grest-changelog/#deprecations","title":"Deprecations:","text":""},{"location":"Build/grest-changelog/#chores","title":"Chores:","text":""},{"location":"Build/grest-changelog/#1010-for-all-networks","title":"[1.0.10] - For all networks.","text":"

The release is effectively the same as 1.0.10rc, except for one minor modification below.

    "},{"location":"Build/grest-changelog/#chores_1","title":"Chores:","text":""},{"location":"Build/grest-changelog/#1010rc-for-non-mainnet-networks","title":"[1.0.10rc] - For non-mainnet networks","text":"

This release primarily focuses on the ability to better support DeFi projects, along with some value addition for existing clients, by bringing in 10 new endpoints (paired with 2 deprecations), a few additional optional input parameters, and some additional output columns to existing endpoints. The only breaking change/fix is to the output returned for tx_info.

Also, dbsync 13.1.x.x has been released and is recommended for use with this release.

    "},{"location":"Build/grest-changelog/#new-endpoints-added_1","title":"New endpoints added","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes_1","title":"Data Input/Output Changes","text":""},{"location":"Build/grest-changelog/#deprecations_1","title":"Deprecations:","text":""},{"location":"Build/grest-changelog/#chores_2","title":"Chores:","text":""},{"location":"Build/grest-changelog/#109-for-all-networks","title":"[1.0.9] - For all networks","text":"

This release is effectively the same as 1.0.9rc below (please check out the notes accordingly), just with a minor bug fix to setup-grest.sh itself.

    "},{"location":"Build/grest-changelog/#109rc-for-non-mainnet-networks","title":"[1.0.9rc] - For non-mainnet networks","text":"

This release candidate is non-breaking for existing methods and inputs, but breaking for the output objects of endpoints. The aim of the release candidate version is to allow folks a couple of weeks to test and adapt their libraries before applying it to mainnet.

    "},{"location":"Build/grest-changelog/#new-endpoints-added_2","title":"New endpoints added","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes_2","title":"Data Input/Output changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#108-for-all-networks","title":"[1.0.8] - For all networks","text":"

This release contains minor bug fixes that were discovered in koios-1.0.7. No major changes to output for this one.

    "},{"location":"Build/grest-changelog/#changes-for-api","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#new-endpoints-added_3","title":"New endpoints added","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes_3","title":"Data Input/Output changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_1","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#107-for-all-networks","title":"[1.0.7] - For all networks","text":"

This release continues updates from koios-1.0.6 to further utilise stake-snapshot cache tables, which are useful for SPOs as well as for reducing downtime post epoch transition. One frequently requested feature - accepting bulk inputs for many block/address/account endpoints - is now complete. Additionally, koios instance providers are now recommended to use cardano-node 1.35.3 with dbsync 13.0.5.

    "},{"location":"Build/grest-changelog/#changes-for-api_1","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#new-endpoints-added_4","title":"New endpoints added","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes_4","title":"Data Input/Output changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_2","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#106106m-interim-release-for-all-networks-to-upgrade-to-dbsync-v13","title":"[1.0.6/1.0.6m] - Interim release for all networks to upgrade to dbsync v13","text":"

The backlog of items not being added to mainnet had been increasing due to delays with the Vasil HFC event on Mainnet. As such, we had to come up with a split-update approach. The mainnet nodes are still not qualified to be Vasil-ready (in our opinion) on 1.35.x, but dbsync 13 can be used against node 1.34.1 fine. In order to cater for this split, we have added an intermediate koios-1.0.6m tag that brings dbsync updates while maintaining node 1.34.1.

    "},{"location":"Build/grest-changelog/#changes-for-api_2","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes","title":"Data Output Changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_3","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#105-alpha-networks-only","title":"[1.0.5] - alpha networks only","text":"

Since there have been a few deviations w.r.t. Vasil for testnet and mainnet, this version targets all networks except Mainnet!

    "},{"location":"Build/grest-changelog/#changes-for-api_3","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes_1","title":"Data Output Changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_4","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#101","title":"[1.0.1]","text":""},{"location":"Build/grest-changelog/#100","title":"[1.0.0]","text":""},{"location":"Build/grest-changelog/#100-rc1","title":"[1.0.0-rc1]","text":""},{"location":"Build/grest-changelog/#changes-for-api_4","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes_2","title":"Data Output Changes","text":""},{"location":"Build/grest-changelog/#input-parameter-changes","title":"Input Parameter Changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_5","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#added","title":"Added","text":""},{"location":"Build/grest-changelog/#fixed","title":"Fixed","text":""},{"location":"Build/grest-changelog/#100-rc0-2022-04-29","title":"[1.0.0-rc0] - 2022-04-29","text":""},{"location":"Build/grest/","title":"Koios gRest","text":"

    Important

    "},{"location":"Build/grest/#what-is-grest","title":"What is gRest","text":"

gRest is an open-source implementation of a query layer built over dbsync using PostgREST and HAProxy. The package is built as part of the Koios team's efforts to unite individual community streams of work, give back a more aligned structure for querying dbsync, and adopt standardisation for queries utilising open-source tooling as well as collaboration. In addition to these, there are also accessibility features to deploy rules for failover, perform healthchecks, set up priorities, prevent DDoS attacks, provide timeouts, report tips for analysis over a longer period, etc. - which can prove to be really useful when performing any analysis for instances.

    Note

Note that the scripts below do allow for provisioning ogmios integration too, but Ogmios - currently - is not designed to provide advanced session management for a server-client architecture in the absence of a middleware. Thus, the availability of ogmios from the monitoring instance is restricted, to avoid the ability to DDoS an instance.

    "},{"location":"Build/grest/#components","title":"Components","text":"
1. PostgREST: An RPC JSON interface for any PostgreSQL database (in our case, the database served via cardano-db-sync) to provide a RESTful Web Service. The endpoints of PostgREST itself are essentially the tables/functions defined in the elected schema via the grest config file. You can read more about the advanced query syntax using the PostgREST API here, but we will provide a simpler view using examples towards the end of the page (see also the illustrative filter example right after this list). It is an easy alternative - with almost no overhead, as it directly serves the underlying database as an API - compared to the Cardano GraphQL component (which may often lag). Some of the other advantages of PostgREST over graphql-based projects are performance, being stateless, zero overhead, and support for JWT / native Postgres DB authentication against the REST interface as well.

2. HAProxy: An easy gateway proxy that automatically provides failover/basic DDoS protection, rule management for load balancing, multiple frontends/backends, easy means to have TLS enabled for public-facing instances, etc. You may alter the settings for the proxy layer as per your SecOps preferences. This component is optional (e.g. if you prefer to expose your PostgREST server itself, you can do so using similar steps below).
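
To give a feel for the PostgREST query syntax mentioned in item 1 above, here is an illustrative example of column selection, ordering and limiting against a local PostgREST port; the endpoint and column names shown are indicative only - consult the API docs for what your instance actually exposes:

curl -s \"http://127.0.0.1:8050/blocks?select=epoch_no,block_height&order=block_height.desc&limit=3\"\n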

    "},{"location":"Build/grest/#setup","title":"Setup gRest services","text":"

To start with, you'd want to ensure your current shell session has access to Postgres credentials, continuing from the examples in the above-mentioned Sample Postgres deployment guide.

cd $CNODE_HOME/priv\nexport PGPASSFILE=$CNODE_HOME/priv/.pgpass\npsql cexplorer\n

Ensure that you can connect to your Postgres DB fine using the above (quit psql once validated, using \\q). As part of guild-deploy.sh execution, you'd find the setup-grest.sh file made available in the ${CNODE_HOME}/scripts folder, which will help you automate the installation of PostgREST and HAProxy, as well as bring in the latest queries/functions provided via Koios to your instances.

    Warning

As of now, gRest services are in the alpha stage - while they can be utilised, please remember there may be breaking changes, and every collaborator is expected to work with the team to keep their instances up-to-date using the alpha branch.

Familiarise yourself with the usage options for the setup script; the syntax can be viewed below:

    cd \"${CNODE_HOME}\"/scripts\n./setup-grest.sh -h\n#\n# Usage: setup-grest.sh [-f] [-i [p][r][m][c][d]] [-u] [-b <branch>]\n# \n# Install and setup haproxy, PostgREST, polling services and create systemd services for haproxy, postgREST and dbsync\n# \n# -f    Force overwrite of all files including normally saved user config sections\n# -i    Set-up Components individually. If this option is not specified, components will only be installed if found missing (eg: -i prcd)\n#     p    Install/Update PostgREST binaries by downloading latest release from github.\n#     r    (Re-)Install Reverse Proxy Monitoring Layer (haproxy) binaries and config\n#     m    Install/Update Monitoring agent scripts\n#     c    Overwrite haproxy, postgREST configs\n#     d    Overwrite systemd definitions\n# -u    Skip update check for setup script itself\n# -q    Run all DB Queries to update on postgres (includes creating grest schema, and re-creating views/genesis table/functions/triggers and setting up cron jobs)\n# -b    Use alternate branch of scripts to download - only recommended for testing/development (Default: master)\n#\n

    To run the setup overwriting all standard deployment tasks from a branch (eg: koios-1.0.9 branch), you may want to use:

    ./setup-grest.sh -f -i prmcd -r -q -b koios-1.0.9\n

    Similarly - if you'd like to re-install all components and force overwrite all configs but not reset cache tables, you may run:

    ./setup-grest.sh -f -i prmcd -q\n

    Another example could be to preserve your config, but only update queries using an alternate branch (eg: let's say you want to try the branch alpha prior to a tagged release). To do so, you may run:

    ./setup-grest.sh -q -b alpha\n

    Please ensure to follow the on-screen instructions, if any (for example restarting deployed services, or updating configs to specify correct target postgres URLs/enable TLS/add peers etc in ${CNODE_HOME}/priv/grest.conf and ${CNODE_HOME}/files/haproxy.cfg).

The default ports used will make the haproxy instance available at port 8053, or 8453 if TLS is enabled (you might want to add a firewall rule to open this port to the services you would like to access). If you want to prevent unauthenticated access to the grest schema, uncomment jwt-secret and specify a custom secret-token.
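
For example, you could generate a random secret and set it as the jwt-secret value in ${CNODE_HOME}/priv/grest.conf (a sketch; jwt-secret is a standard PostgREST config key):

openssl rand -hex 32\n# then, in ${CNODE_HOME}/priv/grest.conf:\n# jwt-secret = \"<paste the generated value here>\"\n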

    Reminder

Once you've successfully deployed the grest instance, it will deploy certain cron jobs that will ensure the relevant cache tables are updated periodically. Until these have finished (especially on the first run, which could take an hour or so on mainnet), your instance will likely not pass any tests from grest-poll.sh, but that's expected.

    "},{"location":"Build/grest/#tls","title":"Enable TLS on HAProxy","text":"

    In order to enable SSL on your haproxy, all you need to do is edit the file ${CNODE_HOME}/files/haproxy.cfg and update the frontend app section to uncomment ssl bind (and comment normal bind).

    Info

If you're not familiar with how to configure TLS, or would not like to buy a certificate, you can find tips on how to create a TLS certificate for free via LetsEncrypt using tutorials here. Once you do have a TLS certificate generated, you need to chain the private key and full-chain cert together in a file - /etc/ssl/server.pem - which can then be referenced as below:

    frontend app\n  #bind 0.0.0.0:8053\n  ## If using SSL, comment line above and uncomment line below\n  bind :8453 ssl crt /etc/ssl/server.pem no-sslv3\n  http-request set-log-level silent\n  acl srv_down nbsrv(grest_postgrest) eq 0\n  acl is_wss hdr(Upgrade) -i websocket\n  ...\n
    Restart haproxy service for changes to take effect.
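
As an example of the chaining step (assuming a LetsEncrypt certificate under the default certbot path; adjust the domain/paths, and the service name if yours differs):

sudo bash -c 'cat /etc/letsencrypt/live/<your-domain>/fullchain.pem /etc/letsencrypt/live/<your-domain>/privkey.pem > /etc/ssl/server.pem'\nsudo systemctl restart haproxy.service\n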

    "},{"location":"Build/grest/#validation","title":"Validation","text":"

With the setup, you also have a checkstatus.sh script, which will query the Postgres DB instance via haproxy (coming through postgREST), and only report an instance as up if the latest block in your DB instance is within 180 seconds.

    Important

If you'd like to join the elastic cluster via Koios, please raise a PR editing the topology files in this folder to do so!!

If you are using the guild network, you could do a couple of very basic sanity checks, as per below:

    1. To query active stake for pool pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr in epoch 122, we can execute the below:

      curl -d _pool_bech32=pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr -d _epoch_no=122 -s http://localhost:8053/rpc/pool_active_stake\n## {\"active_stake_sum\" : 19409732875}\n

    2. To check latest owner key(s) for a given pool pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr, you can execute the below:

      curl -d _pool_bech32=pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr -s http://localhost:8050/rpc/pool_owners\n## [{\"owner\" : \"stake_test1upx5p04dn3t6dvhfh27744su35vvasgaaq565jdxwlxfq5sdjwksw\"}, {\"owner\" : \"stake_test1uqak99cgtrtpean8wqwp7d9taaqkt9gkkxga05m5azcg27chnzfry\"}]\n

You may want to explore all the endpoints that come out of the box and test them out; to do so, refer to the API documentation for OpenAPI3 documentation. Each endpoint has a pre-filled example for mainnet and connects by default to the primary Koios endpoint, allowing you to test endpoints and, if needed, grab the curl commands to start testing against your local or remote instances.

    "},{"location":"Build/grest/#participating-in-koios-cluster-as-instance-provider","title":"Participating in Koios Cluster as instance Provider","text":"

If you're interested in participating in decentralised infrastructure by providing an instance, there are a few additional steps you'd need to take:

1. Enable ports for your HAProxy instance (default: 8053), gRest Exporter service (default: 8059) and (optionally) submit API instance (default: 8090) against the monitoring instance of the corresponding network (you do not need to open these ports to the internet).

2. Ensure that each of the services above is listening on your public IP address (for instance, submitapi.sh might need to be edited to change HOSTADDR to 0.0.0.0 and restarted); a quick check is shown after this list.

    3. Create a PR specifying connectivity information to your HAProxy port here.

4. Make sure to join the telegram discussions group to participate in any discussions, actions, polls for new features, etc. Feel free to give a shout in the group in case you have trouble following any of the above.
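
As referenced in step 2 above, a quick way to confirm the services are listening on the expected interfaces/ports (default ports shown; adjust if you changed them):

ss -tnlp | grep -E ':8053|:8059|:8090'\n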

    "},{"location":"Build/node-cli/","title":"Node & CLI","text":"

    Reminder !!

    Ensure the Pre-Requisites are in place before you proceed.

    "},{"location":"Build/node-cli/#build-instructions","title":"Build Instructions","text":""},{"location":"Build/node-cli/#clone-the-repository","title":"Clone the repository","text":"

    Execute the below to clone the cardano-node repository to $HOME/git folder on your system:

    cd ~/git\ngit clone https://github.com/input-output-hk/cardano-node\ncd cardano-node\n
    "},{"location":"Build/node-cli/#build-cardano-node","title":"Build Cardano Node","text":"

    You can use the instructions below to build the latest release of cardano-node.

    git fetch --tags --all\ngit pull\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-node/releases/latest | jq -r .tag_name)\n\n# Use `-l` argument if you'd like to use system libsodium instead of IOG fork of libsodium while compiling\n$CNODE_HOME/scripts/cabal-build-all.sh\n

The above would copy the built binaries into the ~/.local/bin folder.

    "},{"location":"Build/node-cli/#download-pre-compiled-binary-from-node-release","title":"Download pre-compiled Binary from Node release","text":"

While certain folks might want to build the node themselves (be it due to OS/arch compatibility, trust factor or customisations), for most it might not make sense to build the node locally. Instead, you can download the binaries using the cardano-node release notes, wherein you can find the download links for every version. Once downloaded, you would want to make the binaries available in your preferred PATH in your environment (if you're asking how - that'd mean you've skipped the skillsets mentioned on the homepage).

    "},{"location":"Build/node-cli/#verify","title":"Verify","text":"

Execute cardano-cli and cardano-node to verify the output as below (the exact version and git rev will depend on the tag you checked out from the github repository):

    cardano-cli version\n# cardano-cli 8.1.2 - linux-x86_64 - ghc-8.10\n# git rev <...>\ncardano-node version\n# cardano-node 8.1.2 - linux-x86_64 - ghc-8.10\n# git rev <...>\n
    "},{"location":"Build/node-cli/#update-port-number-or-pool-name-for-relative-paths","title":"Update port number or pool name for relative paths","text":"

    Before you go ahead with starting your node, you may want to update values for CNODE_PORT in $CNODE_HOME/scripts/env. Note that it is imperative for operational relays and pools to ensure that the port mentioned is opened via firewall to the destination your node is supposed to connect from. Update your network/firewall configuration accordingly. Future executions of guild-deploy.sh will preserve and not overwrite these values.

    CNODEBIN=\"${HOME}/.local/bin/cardano-node\"\nCCLI=\"${HOME}/.local/bin/cardano-cli\"\nCNODE_PORT=6000\nPOOL_NAME=\"GUILD\"\n

    Important

POOL_NAME is the name of the folder that you will use when registering pools and starting the node in core mode. This folder would typically contain the required hot.skey, vrf.skey and op.cert files. If the mentioned files are absent, the node will automatically start in passive mode. Note that in case CNODE_PORT is changed, you'd want to re-do the deployment of the systemd service as mentioned later in the guide.
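
As an illustration, for the default POOL_NAME of GUILD the node would look for the keys/certificate in a layout like the below (paths per the defaults used elsewhere in this guide):

ls -l $CNODE_HOME/priv/pool/GUILD\n# hot.skey  op.cert  vrf.skey\n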

    "},{"location":"Build/node-cli/#start-the-node","title":"Start the node","text":"

To test starting the node in interactive mode, you can use the pre-built script below (cnode.sh); note that your node logs are being written to the $CNODE_HOME/logs folder, so you may not see much output beyond Listening on http://127.0.0.1:12798. This script automatically determines whether to start the node as a relay or block producer (if the required pool keys are present in $CNODE_HOME/priv/pool/<POOL_NAME> as mentioned above). The script contains a user-defined variable CPU_CORES which determines the number of CPU cores the node will use upon start-up:

    ######################################\n# User Variables - Change as desired #\n# Common variables set in env file   #\n######################################\n\n#CPU_CORES=2            # Number of CPU cores cardano-node process has access to (please don't set higher than physical core count, 2-4 recommended)\n
You can uncomment this and set it to the desired number, but be wary not to go above your physical core count.
    cd \"${CNODE_HOME}\"/scripts\n./cnode.sh\n

    Ensure you do not have any errors in the console. To stop the node, hit Ctrl-C - we will start the node as systemd later in the document.

    "},{"location":"Build/node-cli/#modify-the-node-to-p2p-mode","title":"Modify the node to P2P mode","text":"

    Note

The section below only refers to mainnet, as the Guildnet/Preview/Preprod templates already come with P2P as the default mode, and do not require the steps below.

    In case you prefer to start the node in P2P mode (ideally, only on relays), you can do so by replacing the config.json and topology.json files in $CNODE_HOME/files folder. You can find a sample of these two files that can be downloaded using commands below:

    cd \"${CNODE_HOME}\"/files\nmv config.json config.json.bkp_$(date +%s)\nmv topology.json topology.json.bkp_$(date +%s)\ncurl -sL -f \"https://raw.githubusercontent.com/cardano-community/guild-operators/master/files/config-mainnet.p2p.json\" -o config.json\ncurl -sL -f \"https://raw.githubusercontent.com/cardano-community/guild-operators/alpha/files/topology-mainnet.json\" -o topology.json\n

Once downloaded, you'd want to update config.json (if you want to update any port/path references or change tracers from the default) and the topology.json file to include your core/relay nodes in the localRoots section (replacing the dummy \"127.0.0.1\" values currently in place). The P2P topology file provides you a few public nodes (the IO-provided mainnet nodes) as a fallback to avoid a single point of reliance. You can also remove/update any additional peers as per your preference.

Once updated - since you modified the file manually, there is always a chance of human error (e.g. a missing comma/quote) - we would recommend you start the node interactively once again before proceeding.

    cd \"${CNODE_HOME}\"/scripts\n./cnode.sh\n

    Ensure you do not have any errors in the console. To stop the node, hit Ctrl-C - we will start the node as systemd later in the document.

    Note

    An average pool operator may not require cardano-submit-api at all. Please verify if it is required for your use as mentioned here. If - however - you do run submit-api for accepting sizeable transaction load, you would want to override the default MEMPOOL_BYTES by uncommenting it in cnode.sh.

    "},{"location":"Build/node-cli/#start-the-submit-api","title":"Start the submit-api","text":"

    cardano-submit-api is one of the binaries built as part of cardano-node repository and allows you to submit transactions over a Web API. To run this service interactively, you can use the pre-built script below (submitapi.sh). Make sure to update submitapi.sh script to change listen IP or Port that you'd want to make this service available on.

    cd $CNODE_HOME/scripts\n./submitapi.sh\n

    To stop the process, hit Ctrl-C
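
As a quick smoke test once the service is up (assuming the default port of 8090 and that you have a transaction's raw CBOR bytes in a file, e.g. extracted from the cborHex field of a signed tx envelope; the file name below is just a placeholder):

curl -s -X POST --header \"Content-Type: application/cbor\" --data-binary @tx.submit.cbor http://127.0.0.1:8090/api/submit/tx\n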

    "},{"location":"Build/node-cli/#systemd","title":"Run as systemd service","text":"

The preferred way to run the node (and submit-api) is through a service manager like systemd. This section explains how to set up a systemd service file.

    1. Deploy as a systemd service Execute the below command to deploy your node as a systemd service (from the respective scripts folder):

    cd $CNODE_HOME/scripts\n./cnode.sh -d\n# Deploying cnode.service as systemd service..\n# cnode.service deployed successfully!!\n\n./submitapi.sh -d\n# Deploying cnode-submit-api.service as systemd service..\n# cnode-submit-api deployed successfully!!\n

2. Start the service Run the below commands to start the services (to enable automatic start on startup, see the enable commands shown after this list).

    sudo systemctl start cnode.service\nsudo systemctl start cnode-submit-api.service\n

    3. Check status and stop/start commands Replace status with stop/start/restart depending on what action to take.

    sudo systemctl status cnode.service\nsudo systemctl status cnode-submit-api.service\n
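
The start commands above do not by themselves enable the services on boot; if they are not already enabled, the standard systemd enable command can be used (harmless to re-run):

sudo systemctl enable cnode.service\nsudo systemctl enable cnode-submit-api.service\n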

    Important

In case you see the node exit unsuccessfully upon checking the status, please verify you've followed the transition process correctly as documented below, and that you do not have another instance of the node already running. It would help to check your system logs (/var/log/syslog for Debian-based and /var/log/messages for Red Hat/CentOS/Fedora systems; you can also check journalctl -f -u <service> to examine the startup attempt for services) for any errors while starting the node.

    You can use gLiveView to monitor your node that was started as a systemd service.

    cd $CNODE_HOME/scripts\n./gLiveView.sh\n
    "},{"location":"Build/offchain-metadata-tools/","title":"Offchain Metadata Tools","text":"

    Important

    In the Cardano multi-asset era, this project helps you create and submit metadata describing your assets, storing them off-chain.

    "},{"location":"Build/offchain-metadata-tools/#download-pre-built-binaries","title":"Download pre-built binaries","text":"

Go to input-output-hk/offchain-metadata-tools to download the binaries and place them in a directory in your PATH, e.g. $HOME/.local/bin/.

    "},{"location":"Build/offchain-metadata-tools/#build-instructions","title":"Build Instructions","text":"

As an alternative to the pre-built binaries, the instructions below describe how to build the token-metadata-creator tool, but the offchain-metadata-tools repository contains other tools as well. Build the ones needed for your installation.

    "},{"location":"Build/offchain-metadata-tools/#clone-the-repository","title":"Clone the repository","text":"

    Execute the below to clone the offchain-metadata-tools repository to $HOME/git folder on your system:

    cd ~/git\ngit clone https://github.com/input-output-hk/offchain-metadata-tools.git\ncd offchain-metadata-tools/token-metadata-creator\n
    "},{"location":"Build/offchain-metadata-tools/#build-token-metadata-creator","title":"Build token-metadata-creator","text":"

You can use the instructions below to build token-metadata-creator; the same steps can be executed in the future to update the binaries (replacing the appropriate tag) as well.

    git fetch --tags --all\ngit pull\n# Replace master with appropriate tag if you'd like to avoid compiling against master\ngit checkout master\n$CNODE_HOME/scripts/cabal-build-all.sh\n
The above would copy the binaries into the ~/.local/bin folder.

    "},{"location":"Build/offchain-metadata-tools/#verify","title":"Verify","text":"

    Verify that the tool is executable from anywhere by running:

    token-metadata-creator -h\n
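
As a brief illustration of typical usage for drafting a metadata entry (subcommands/flags may differ between releases - check token-metadata-creator entry -h for the exact options of your build; the subject and values below are placeholders):

token-metadata-creator entry --init <subject>\ntoken-metadata-creator entry <subject> --name \"MyToken\" --description \"My example token\"\ntoken-metadata-creator entry <subject> --finalize\n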
    "},{"location":"Build/wallet/","title":"Wallet","text":"

    !> - An average pool operator may not require cardano-wallet at all. Please verify if it is required for your use as mentioned here.

    Ensure the Pre-Requisites are in place before you proceed.

    "},{"location":"Build/wallet/#build-instructions","title":"Build Instructions","text":"

Follow the instructions below to build the cardano-wallet binary:

    "},{"location":"Build/wallet/#clone-the-repository","title":"Clone the repository","text":"

    Execute the below to clone the cardano-wallet repository to $HOME/git folder on your system:

    cd ~/git\ngit clone https://github.com/input-output-hk/cardano-wallet\ncd cardano-wallet\n
    "},{"location":"Build/wallet/#build-cardano-wallet","title":"Build Cardano Wallet","text":"

    You can use the instructions below to build the latest release of cardano-wallet.

    !> - Note that the latest release of cardano-wallet may not work with the latest release of cardano-node. Please check the compatibility of each cardano-wallet release yourself in the official docs, e.g. https://github.com/input-output-hk/cardano-wallet/releases/latest.

    git fetch --tags --all\ngit pull\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-wallet/releases/latest | jq -r .tag_name)\n$CNODE_HOME/scripts/cabal-build-all.sh\n

    The above would copy the binaries into ~/.local/bin folder.

    "},{"location":"Build/wallet/#start-the-wallet","title":"Start the wallet","text":"

You can run the below to connect to a cardano-node instance that is expected to already be running; the wallet will then start syncing.

# if using the testnet flag instead of --mainnet, you also need to specify the testnet shelley-genesis.json file\ncardano-wallet serve \\\n    --node-socket $CNODE_HOME/sockets/node0.socket \\\n    --mainnet \\\n    --database $CNODE_HOME/priv/wallet\n

    "},{"location":"Build/wallet/#verify-the-wallet-is-handling-requests","title":"Verify the wallet is handling requests","text":"

    cardano-wallet network information\n
    Expected output should be similar to the following
    Ok.\n{\n\"network_tip\": {\n\"time\": \"2021-06-01T17:31:05Z\",\n\"epoch_number\": 269,\n\"absolute_slot_number\": 31002374,\n\"slot_number\": 157574\n},\n\"node_era\": \"mary\",\n\"node_tip\": {\n\"height\": {\n\"quantity\": 5795127,\n\"unit\": \"block\"\n},\n\"time\": \"2021-06-01T17:31:00Z\",\n\"epoch_number\": 269,\n\"absolute_slot_number\": 31002369,\n\"slot_number\": 157569\n},\n\"sync_progress\": {\n\"status\": \"ready\"\n},\n\"next_epoch\": {\n\"epoch_start_time\": \"2021-06-04T21:44:51Z\",\n\"epoch_number\": 270\n}\n}\n

    "},{"location":"Build/wallet/#creatingrestoring-wallet","title":"Creating/Restoring Wallet","text":"

    If you're creating a new wallet, you'd first want to generate a mnemonic for use (see below):

    cardano-wallet recovery-phrase generate\n# false brother typical saddle settle phrase foster sauce ask sunset firm gate service render burger\n
    You can use the above mnemonic to then restore a wallet as per below:
    cardano-wallet wallet create from-recovery-phrase MyWalletName\n

    "},{"location":"Build/wallet/#expected-output","title":"Expected output:","text":"
    Please enter a 15\u201324 word recovery phrase: false brother typical saddle settle phrase foster sauce ask sunset firm gate service render burger\n(Enter a blank line if you do not wish to use a second factor.)\nPlease enter a 9\u201312 word second factor:\nPlease enter a passphrase: **********\nEnter the passphrase a second time: **********\nOk.\n{\n    ...\n}\n
    "},{"location":"Scripts/blockperf/","title":"BlockPerf","text":"

    Reminder !!

    Ensure the Pre-Requisites are in place before you proceed.

    blockPerf.sh is a script to monitor the network propagation of new blocks as seen by the local cardano-node.

    "},{"location":"Scripts/blockperf/#block-propagation-traces","title":"Block propagation traces","text":"

    Although blockPerf can also run on the block producer, it makes the most sense to run it on the upstream relays. There it waits for each new block announced to the relay over the network by its remote peers.

It looks at the delay times that result at each step of the block propagation (header announced, block requested, block downloaded, block adopted), as shown in the console view further below.

You can view this data locally as a console stream, or run it as a systemd service in the background.

BlockPerf also sends this data to the TopologyUpdater server, making it possible to compare this data across operators (similar to sendtip to pooltool). As a contributing operator, you get the ability to see how your own relays compare to other nodes regarding receive quality, delay times and thus performance.

    There is no connection or constraint between the TopologyUpdater Relay subscription and the BlockPerf analysis. BlockPerf is even designed to work outside the cnTools suite.

The results of this data are a good basis for making optimisations and for evaluating which changes were useful or might be required to improve performance compared to other relay nodes.

    "},{"location":"Scripts/blockperf/#installation","title":"Installation","text":"

The script is best run as a background process. This can be accomplished in many ways, but the preferred method is to run it as a systemd service. A terminal multiplexer like tmux or screen could also be used, but is not covered here.

    "},{"location":"Scripts/blockperf/#run-as-service","title":"Run as service","text":"

Use the deploy-as-systemd.sh script to create a systemd unit file. In this setup the script is started in \"service\" mode. Error/Warn level log output is handled by syslog and ends up in the system's standard syslog file, normally /var/log/syslog. journalctl -f -u cnode-tu-blockperf.service can be used to check service output (follow mode).

Outside the cnTools environment, call blockPerf.sh -d to install it as a systemd service.

    "},{"location":"Scripts/blockperf/#console-view","title":"Console view","text":"

If you run blockPerf locally in the console (scripts/blockPerf.sh), immediately after the appearance of a new block it shows where it came from, how many slots away from the previous block it was, and how many milliseconds the individual steps took.

    Block:.... 6860534\n Slot..... 52833850 (+59s)\n ......... 2022-02-09 09:49:01\n Header... 2022-02-09 09:49:02,780 (+1780 ms)\n Request.. 2022-02-09 09:49:02,780 (+0 ms)\n Block.... 2022-02-09 09:49:02,830 (+50 ms)\n Adopted.. 2022-02-09 09:49:02,900 (+70 ms)\n Size..... 79976 bytes\n delay.... 1.819971868 sec\n From..... 104.xxx.xxx.61:3001\n\nBlock:.... 6860535\n Slot..... 52833857 (+7s)\n ......... 2022-02-09 09:49:08\n Header... 2022-02-09 09:49:08,960 (+960 ms)\n Request.. 2022-02-09 09:49:08,970 (+10 ms)\n Block.... 2022-02-09 09:49:09,020 (+50 ms)\n Adopted.. 2022-02-09 09:49:09,090 (+70 ms)\n Size..... 64950 bytes\n delay.... 1.028341023 sec\n From..... 34.xxx.xxx.15:4001\n
    "},{"location":"Scripts/blockperf/#collaborative-web-view","title":"Collaborative web view","text":"

    A further aim of the blockPerf project is to use the data that individual nodes send to the central TopologyUpdater database to produce graphical visualisations and evaluations that provide the participating node operators with useful insights into their performance compared to all others.

    "},{"location":"Scripts/cncli/","title":"CNCLI","text":"

    Reminder !!

    Ensure the Pre-Requisites are in place before you proceed.

cncli.sh is a script to download and deploy CNCLI, created and maintained by Andrew Westberg. It's a community-based CLI tool written in Rust for low-level cardano-node communication. Usage is optional and no script is dependent on it. The main functions include sync, leaderlog, validate and the PoolTool sendslots/sendtip, described below.

    "},{"location":"Scripts/cncli/#installation","title":"Installation","text":"

The cncli.sh script's main functions - sync, leaderlog, validate and the PoolTool sendslots/sendtip - are not meant to be run manually, but instead deployed as systemd services that run in the background to do the block scraping and validation automatically. Additional commands exist for manual execution to initiate the sqlite db, fill the blocklog DB with all blocks created by the pool known to the blockchain, migrate the old cntoolsBlockCollector JSON blocklog, and re-validate blocks and leaderlogs. See the usage output below for a complete list of available commands.

The script works in tandem with Log Monitor to provide faster adopted status, but mainly to catch slots the node is leader for but is unable to create a block for. These are marked as invalid. Blocklog will, however, work fine without the logMonitor service, and CNCLI is able to handle everything except catching invalid blocks.

1. Run the latest version of guild-deploy.sh with guild-deploy.sh -s c to download and install Rust and CNCLI. The IOG fork of libsodium required by CNCLI is automatically compiled by the CNCLI build process. If a previous installation is found, Rust and CNCLI will be updated to the latest version.
2. Run deploy-as-systemd.sh to deploy the systemd services that handle all the work in the background. Seven systemd services in total are deployed, of which five are related to CNCLI. See above for the different purposes they serve.
3. If you want to disable some of the deployed services, run sudo systemctl disable <service>. The deployed services are:

- cnode.service (main cardano-node launcher)
- cnode-cncli-sync.service
- cnode-cncli-leaderlog.service
- cnode-cncli-validate.service
- cnode-cncli-ptsendtip.service
- cnode-cncli-ptsendslots.service
- cnode-logmonitor.service (see Log Monitor)
    "},{"location":"Scripts/cncli/#configuration","title":"Configuration","text":"

You can override the values in the script in the User Variables section shown below. POOL_ID, POOL_VRF_SKEY and POOL_VRF_VKEY should automatically be detected if POOL_NAME is set in the common env file, and can be left commented. PT_API_KEY and POOL_TICKER need to be set in the script before starting the services if the PoolTool sendtip/sendslots are to be used. For the rest of the commented values, if the defaults do not provide the right values, uncomment and make adjustments.

    #POOL_ID=\"\"                               # Automatically detected if POOL_NAME is set in env. Required for leaderlog calculation & pooltool sendtip, lower-case hex pool id\n#POOL_VRF_SKEY=\"\"                         # Automatically detected if POOL_NAME is set in env. Required for leaderlog calculation, path to pool's vrf.skey file\n#POOL_VRF_VKEY=\"\"                         # Automatically detected if POOL_NAME is set in env. Required for block validation, path to pool's vrf.vkey file\n#PT_API_KEY=\"\"                            # POOLTOOL sendtip: set API key, e.g \"a47811d3-0008-4ecd-9f3e-9c22bdb7c82d\"\n#POOL_TICKER=\"\"                           # POOLTOOL sendtip: set the pools ticker, e.g. \"TCKR\"\n#PT_HOST=\"127.0.0.1\"                      # POOLTOOL sendtip: connect to a remote node, preferably block producer (default localhost)\n#PT_PORT=\"${CNODE_PORT}\"                  # POOLTOOL sendtip: port of node to connect to (default is CNODE_PORT from the env file)\n#CNCLI_DIR=\"${CNODE_HOME}/guild-db/cncli\" # path to the directory for cncli sqlite db\n#SLEEP_RATE=60                            # CNCLI leaderlog/validate: time to wait until next check (in seconds)\n#CONFIRM_SLOT_CNT=600                     # CNCLI validate: require at least these many slots to have passed before validating\n#CONFIRM_BLOCK_CNT=15                     # CNCLI validate: require at least these many blocks on top of minted before validating\n#TIMEOUT_LEDGER_STATE=300                 # CNCLI leaderlog: timeout in seconds for ledger-state query\n#BATCH_AUTO_UPDATE=N                      # Set to Y to automatically update the script if a new version is available without user interaction\n
    "},{"location":"Scripts/cncli/#run","title":"Run","text":"

Services are controlled by sudo systemctl <status|start|stop|restart> <service name>. All services are configured as child services of cnode.service and, as such, when an action is taken against this service it is replicated to all child services. E.g. running sudo systemctl start cnode.service will also start all child services.

Log output is handled by syslog and ends up in the system's standard syslog file, normally /var/log/syslog. journalctl -f -u <service> can be used to check service output (follow mode). Other logging configurations are not covered here.

Recommended workflow to get started with CNCLI blocklog:

1. Install and deploy services according to the Installation section.
2. Set required user variables according to the Configuration section.
3. (optional) If a previous blocklog db created by cntoolsBlockCollector exists, run this command to migrate the json storage to the new SQLite DB: $CNODE_HOME/scripts/cncli.sh migrate <path>, where <path> is the location of the directory containing all blocks_.json files.
4. Start the deployed services with:
- sudo systemctl start cnode-cncli-sync.service (starts leaderlog, validate & ptsendslots automatically)
- sudo systemctl start cnode-logmonitor.service
- sudo systemctl start cnode-cncli-ptsendtip.service (optional but recommended)
- alternatively, restart the main service, which will trigger a start of all services, with: sudo systemctl restart cnode.service
5. Run the init command to fill the db with all blocks made by your pool known to the blockchain: $CNODE_HOME/scripts/cncli.sh init
6. Enjoy full blocklog automation and visit the View Blocklog section for instructions on how to show blocks from the blocklog DB.

Usage: cncli.sh [operation <sub arg>]\nScript to run CNCLI, best launched through systemd deployed by 'deploy-as-systemd.sh'\n\nsync        Start CNCLI chainsync process that connects to cardano-node to sync blocks stored in SQLite DB (deployed as service)\nleaderlog   One-time leader schedule calculation for current epoch, then continuously monitors and calculates schedule for coming epochs, 1.5 days before epoch boundary on the mainnet (deployed as service)\n  force     Manually force leaderlog calculation and overwrite even if already done, exits after leaderlog is calculated\nvalidate    Continuously monitor and confirm that the blocks made actually were accepted and adopted by chain (deployed as service)\n  all       One-time re-validation of all blocks in blocklog db\n  epoch     One-time re-validation of blocks in blocklog db for the specified epoch \nptsendtip   Send node tip to PoolTool for network analysis and to show that your node is alive and well with a green badge (deployed as service)\nptsendslots Securely sends PoolTool the number of slots you have assigned for an epoch and validates the correctness of your past epochs (deployed as service)\ninit        One-time initialization adding all minted and confirmed blocks to blocklog\nmigrate     One-time migration from old blocklog (cntoolsBlockCollector) to new format (post cncli)\n  path      Path to the old cntoolsBlockCollector blocklog folder holding json files with blocks created\n
      "},{"location":"Scripts/cncli/#view-blocklog","title":"View Blocklog","text":"

Blocklog is best and most easily viewed in CNTools and gLiveView, but the blocklog database is a SQLite DB, so if you are comfortable with SQL, the sqlite3 command can be used to query the DB.

      Block status

      - Leader    : Scheduled to make block at this slot\n- Ideal     : Expected/Ideal number of blocks assigned based on active stake (sigma)\n- Luck      : Leader slots assigned vs ideal slots for this epoch\n- Adopted   : Block created successfully\n- Confirmed : Block created validated to be on-chain with the certainty set in `cncli.sh` for `CONFIRM_BLOCK_CNT`\n- Missed    : Scheduled at slot but no record of it in CNCLI DB and no other pool has made a block for this slot\n- Ghosted   : Block created but marked as orphaned and no other pool has made a valid block for this slot -> height battle or block propagation issue\n- Stolen    : Another pool has a valid block registered on-chain for the same slot\n- Invalid   : Pool failed to create block, base64 encoded error message can be decoded with `echo <base64 hash> | base64 -d | jq -r`\n
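
If you'd rather query the DB directly with sqlite3 as mentioned above, a minimal sketch follows (it assumes the default guild-db location and a blocklog table with epoch/status columns as used by the tooling; adjust the path and names if your setup differs):

sqlite3 ${CNODE_HOME}/guild-db/blocklog/blocklog.db 'select epoch, status, count(*) from blocklog group by epoch, status order by epoch desc;'\n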
      CNTools

      Open CNTools and select [b] Blocks to open the block viewer. Either select Epoch and enter the epoch you want to see a detailed view for or choose Summary to display blocks for last x epochs.

      If the node was elected to create blocks in the selected epoch it could look something like this:

      Summary
       >> BLOCKS\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCurrent epoch: 96\n\n+--------+---------------------------+----------------------+--------------------------------------+\n| Epoch  | Leader | Ideal | Luck     | Adopted | Confirmed  | Missed | Ghosted | Stolen | Invalid  |\n+--------+---------------------------+----------------------+--------------------------------------+\n| 96     | 34     | 31.66 | 107.39%  | 18      | 18         | 0      | 0       | 0      | 0        |\n| 95     | 32     | 30.57 | 104.68%  | 32      | 32         | 0      | 0       | 0      | 0        |\n+--------+---------------------------+----------------------+--------------------------------------+\n\n[h] Home | [b] Block View | [i] Info | [*] Refresh\n
      Epoch
       >> BLOCKS\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCurrent epoch: 96\n\n+---------------------------+----------------------+--------------------------------------+\n| Leader | Ideal | Luck     | Adopted | Confirmed  | Missed | Ghosted | Stolen | Invalid  |\n+---------------------------+----------------------+--------------------------------------+\n| 34     | 31.66 | 107.39%  | 18      | 18         | 0      | 0       | 0      | 0        |\n+---------------------------+----------------------+--------------------------------------+\n\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n| #   | Status     | Block    | Slot | SlotInEpoch  | Scheduled At             | Size  | Hash                                                              |\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n| 1   | confirmed  | 2043444  | 11142827 | 40427    | 2020-11-16 08:34:03 CET  | 3     | ec216d3fb01e4a3cc3e85305145a31875d9561fa3bbcc6d0ee8297236dbb4115  |\n| 2   | confirmed  | 2044321  | 11165082 | 62682    | 2020-11-16 14:44:58 CET  | 3     | b75c33a5bbe49a74e4b4cc5df4474398bfb10ed39531fc65ec2acc51f89ddce5  |\n| 3   | confirmed  | 2044397  | 11166970 | 64570    | 2020-11-16 15:16:26 CET  | 3     | c1ea37fd72543779b6dab46e3e29e0e422784b5fd6188f828ace9eabcc87088f  |\n| 4   | confirmed  | 2044879  | 11178909 | 76509    | 2020-11-16 18:35:25 CET  | 3     | 35a116cec80c5dc295415e4fc8e6435c562b14a5d6833027006c988706c60307  |\n| 5   | confirmed  | 2046965  | 11232557 | 130157   | 2020-11-17 09:29:33 CET  | 3     | d566e5a1f6a3d78811acab4ae3bdcee6aa42717364f9afecd6cac5093559f466  |\n| 6   | confirmed  | 2047101  | 11235675 | 133275   | 2020-11-17 10:21:31 CET  | 3     | 3a638e01f70ea1c4a660fe4e6333272e6c61b11cf84dc8a5a107b414d1e057eb  |\n| 7   | confirmed  | 2047221  | 11238453 | 136053   | 2020-11-17 11:07:49 CET  | 3     | 843336f132961b94276603707751cdb9a1c2528b97100819ce47bc317af0a2d6  |\n| 8   | confirmed  | 2048692  | 11273507 | 171107   | 2020-11-17 20:52:03 CET  | 3     | 9b3eb79fe07e8ebae163870c21ba30460e689b23768d2e5f8e7118c572c4df36  |\n| 9   | confirmed  | 2049058  | 11282619 | 180219   | 2020-11-17 23:23:55 CET  | 3     | 643396ea9a1a2b6c66bb83bdc589fa19c8ae728d1f1181aab82e8dfe508d430a  |\n| 10  | confirmed  | 2049321  | 11289237 | 186837   | 2020-11-18 01:14:13 CET  | 3     | d93d305a955f40b2298247d44e4bc27fe9e3d1486ef3ef3e73b235b25247ccd7  |\n| 11  | confirmed  | 2049747  | 11299205 | 196805   | 2020-11-18 04:00:21 CET  | 3     | 19a43deb5014b14760c3e564b41027c5ee50e0a252abddbfcac90c8f56dc0245  |\n| 12  | confirmed  | 2050415  | 11316075 | 213675   | 2020-11-18 08:41:31 CET  | 3     | dd2cb47653f3bfb3ccc8ffe76906e07d96f1384bafd57a872ddbab3b352403e3  |\n| 13  | confirmed  | 2050505  | 11318274 | 215874   | 2020-11-18 09:18:10 CET  | 3     | deb834bc42360f8d39eefc5856bb6d7cabb6b04170c842dcbe7e9efdf9dbd2e1  |\n| 14  | confirmed  | 2050613  | 11320754 | 218354   | 2020-11-18 09:59:30 CET  | 3     | bf094f6fde8e8c29f568a253201e4b92b078e9a1cad60706285e236a91ec95ff  |\n| 15  | confirmed  | 2050807  | 11325239 | 222839   | 2020-11-18 11:14:15 CET  | 3     | 21f904346ba0fd2bb41afaae7d35977cb929d1d9727887f541782576fc6a62c9  |\n| 16  | confirmed  | 2050997  | 11330062 | 227662   | 2020-11-18 12:34:38 CET  | 3     | 
109799d686fe3cad13b156a2d446a544fde2bf5d0e8f157f688f1dc30f35e912  |\n| 17  | confirmed  | 2051286  | 11336791 | 234391   | 2020-11-18 14:26:47 CET  | 3     | bb1beca7a1d849059110e3d7dc49ecf07b47970af2294fe73555ddfefb9561a8  |\n| 18  | confirmed  | 2051734  | 11348498 | 246098   | 2020-11-18 17:41:54 CET  | 3     | 87940b53c2342999c1ba4e185038cda3d8382891a16878a865f5114f540683de  |\n| 19  | leader     | -        | 11382001 | 279601   | 2020-11-19 03:00:17 CET  | -     | -                                                                 |\n| 20  | leader     | -        | 11419959 | 317559   | 2020-11-19 13:32:55 CET  | -     | -                                                                 |\n| 21  | leader     | -        | 11433174 | 330774   | 2020-11-19 17:13:10 CET  | -     | -                                                                 |\n| 22  | leader     | -        | 11434241 | 331841   | 2020-11-19 17:30:57 CET  | -     | -                                                                 |\n| 23  | leader     | -        | 11435289 | 332889   | 2020-11-19 17:48:25 CET  | -     | -                                                                 |\n| 24  | leader     | -        | 11440314 | 337914   | 2020-11-19 19:12:10 CET  | -     | -                                                                 |\n| 25  | leader     | -        | 11442361 | 339961   | 2020-11-19 19:46:17 CET  | -     | -                                                                 |\n| 26  | leader     | -        | 11443861 | 341461   | 2020-11-19 20:11:17 CET  | -     | -                                                                 |\n| 27  | leader     | -        | 11446997 | 344597   | 2020-11-19 21:03:33 CET  | -     | -                                                                 |\n| 28  | leader     | -        | 11453110 | 350710   | 2020-11-19 22:45:26 CET  | -     | -                                                                 |\n| 29  | leader     | -        | 11455323 | 352923   | 2020-11-19 23:22:19 CET  | -     | -                                                                 |\n| 30  | leader     | -        | 11505987 | 403587   | 2020-11-20 13:26:43 CET  | -     | -                                                                 |\n| 31  | leader     | -        | 11514983 | 412583   | 2020-11-20 15:56:39 CET  | -     | -                                                                 |\n| 32  | leader     | -        | 11516010 | 413610   | 2020-11-20 16:13:46 CET  | -     | -                                                                 |\n| 33  | leader     | -        | 11518958 | 416558   | 2020-11-20 17:02:54 CET  | -     | -                                                                 |\n| 34  | leader     | -        | 11533254 | 430854   | 2020-11-20 21:01:10 CET  | -     | -                                                                 |\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n
      gLiveView

Currently shows a block summary for the current epoch. For full block details, use CNTools for now. Invalid, missed, ghosted and stolen blocks are only shown when their count is non-zero.

      \u2502--------------------------------------------------------------\u2502\n\u2502 BLOCKS   Leader  | Ideal  | Luck    | Adopted | Confirmed    \u2502\n\u2502          24        27.42    87.53%    1         1            \u2502\n\u2502          08:07:57 until leader XXXXXXXXX.....................\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
      "},{"location":"Scripts/cntools-changelog/","title":"Changelog","text":"

      All notable changes to this tool will be documented in this file.

Whenever you're updating between versions where the format/hash of keys has changed, or you're changing networks, it is recommended to back up your Wallet and Pool folders before you proceed with launching CNTools on a fresh network.
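As a minimal sketch (assuming the default priv folder layout shown in the env file further below), such a backup can be as simple as archiving the two folders before upgrading:

cd $CNODE_HOME\ntar czf ~/cnode-priv-backup-$(date +%F).tar.gz priv/wallet priv/pool\n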

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

      "},{"location":"Scripts/cntools-changelog/#1102-2023-10-30","title":"[11.0.2] - 2023-10-30","text":""},{"location":"Scripts/cntools-changelog/#fixed","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1101-2023-10-25","title":"[11.0.1] - 2023-10-25","text":""},{"location":"Scripts/cntools-changelog/#fixed_1","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1100-2023-07-05","title":"[11.0.0] - 2023-07-05","text":""},{"location":"Scripts/cntools-changelog/#changed","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#1040-2023-06-19","title":"[10.4.0] - 2023-06-19","text":""},{"location":"Scripts/cntools-changelog/#added","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#1031-2023-06-03","title":"[10.3.1] - 2023-06-03","text":""},{"location":"Scripts/cntools-changelog/#fixed_2","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1030-2023-05-18","title":"[10.3.0] - 2023-05-18","text":""},{"location":"Scripts/cntools-changelog/#added_1","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#1023-2023-04-28","title":"[10.2.3] - 2023-04-28","text":""},{"location":"Scripts/cntools-changelog/#fixed_3","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1022-2023-04-24","title":"[10.2.2] - 2023-04-24","text":""},{"location":"Scripts/cntools-changelog/#fixed_4","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1021-2023-04-04","title":"[10.2.1] - 2023-04-04","text":""},{"location":"Scripts/cntools-changelog/#fixed_5","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1020-2023-03-13","title":"[10.2.0] - 2023-03-13","text":""},{"location":"Scripts/cntools-changelog/#fixed_6","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#changed_1","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#1011-2023-02-07","title":"[10.1.1] - 2023-02-07","text":""},{"location":"Scripts/cntools-changelog/#fixed_7","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1010-2023-01-17","title":"[10.1.0] - 2023-01-17","text":""},{"location":"Scripts/cntools-changelog/#added_2","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_2","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_8","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1005-2022-11-07","title":"[10.0.5] - 2022-11-07","text":""},{"location":"Scripts/cntools-changelog/#changed_3","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#1004-2022-08-26","title":"[10.0.4] - 2022-08-26","text":""},{"location":"Scripts/cntools-changelog/#changed_4","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_9","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1003-2022-08-16","title":"[10.0.3] - 2022-08-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_10","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1002-2022-08-13","title":"[10.0.2] - 2022-08-13","text":""},{"location":"Scripts/cntools-changelog/#fixed_11","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1001-2022-07-14","title":"[10.0.1] - 2022-07-14","text":""},{"location":"Scripts/cntools-changelog/#changed_5","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_12","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#1000-2022-06-28","title":"[10.0.0] - 
2022-06-28","text":""},{"location":"Scripts/cntools-changelog/#added_3","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_6","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#910-2022-05-11","title":"[9.1.0] - 2022-05-11","text":""},{"location":"Scripts/cntools-changelog/#changed_7","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#9010-2022-05-03","title":"[9.0.10] - 2022-05-03","text":""},{"location":"Scripts/cntools-changelog/#fixed_13","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#909-2022-03-14","title":"[9.0.9] - 2022-03-14","text":""},{"location":"Scripts/cntools-changelog/#changed_8","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#908-2022-03-07","title":"[9.0.8] - 2022-03-07","text":""},{"location":"Scripts/cntools-changelog/#changed_9","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#907-2022-03-02","title":"[9.0.7] - 2022-03-02","text":""},{"location":"Scripts/cntools-changelog/#fixed_14","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#906-2022-02-20","title":"[9.0.6] - 2022-02-20","text":""},{"location":"Scripts/cntools-changelog/#fixed_15","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#905-2022-02-16","title":"[9.0.5] - 2022-02-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_16","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#904-2022-02-14","title":"[9.0.4] - 2022-02-14","text":""},{"location":"Scripts/cntools-changelog/#fixed_17","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#903-2022-02-01","title":"[9.0.3] - 2022-02-01","text":""},{"location":"Scripts/cntools-changelog/#added_4","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#902-2022-01-22","title":"[9.0.2] - 2022-01-22","text":""},{"location":"Scripts/cntools-changelog/#changed_10","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#901-2022-01-17","title":"[9.0.1] - 2022-01-17","text":""},{"location":"Scripts/cntools-changelog/#changed_11","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#900-2022-01-10","title":"[9.0.0] - 2022-01-10","text":""},{"location":"Scripts/cntools-changelog/#changed_12","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#882-2021-12-28","title":"[8.8.2] - 2021-12-28","text":""},{"location":"Scripts/cntools-changelog/#fixed_18","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#881-2021-12-18","title":"[8.8.1] - 2021-12-18","text":""},{"location":"Scripts/cntools-changelog/#fixed_19","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#880-2021-12-15","title":"[8.8.0] - 2021-12-15","text":""},{"location":"Scripts/cntools-changelog/#fixed_20","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#873-2021-11-30","title":"[8.7.3] - 2021-11-30","text":""},{"location":"Scripts/cntools-changelog/#fixed_21","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#872-2021-11-08","title":"[8.7.2] - 2021-11-08","text":""},{"location":"Scripts/cntools-changelog/#changed_13","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#871-2021-11-04","title":"[8.7.1] - 2021-11-04","text":""},{"location":"Scripts/cntools-changelog/#fixed_22","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#870-2021-10-05","title":"[8.7.0] - 
2021-10-05","text":""},{"location":"Scripts/cntools-changelog/#changed_14","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#866-2021-09-26","title":"[8.6.6] - 2021-09-26","text":""},{"location":"Scripts/cntools-changelog/#fixed_23","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#865-2021-09-15","title":"[8.6.5] - 2021-09-15","text":""},{"location":"Scripts/cntools-changelog/#fixed_24","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#864-2021-09-14","title":"[8.6.4] - 2021-09-14","text":""},{"location":"Scripts/cntools-changelog/#fixed_25","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#863-2021-08-31","title":"[8.6.3] - 2021-08-31","text":""},{"location":"Scripts/cntools-changelog/#fixed_26","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#862-2021-08-30","title":"[8.6.2] - 2021-08-30","text":""},{"location":"Scripts/cntools-changelog/#fixed_27","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#861-2021-08-27","title":"[8.6.1] - 2021-08-27","text":""},{"location":"Scripts/cntools-changelog/#changed_15","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#860-2021-08-27","title":"[8.6.0] - 2021-08-27","text":""},{"location":"Scripts/cntools-changelog/#changed_16","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#8415-2021-07-15","title":"[8.4.15] - 2021-07-15","text":""},{"location":"Scripts/cntools-changelog/#changed_17","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#8414-2021-07-14","title":"[8.4.14] - 2021-07-14","text":""},{"location":"Scripts/cntools-changelog/#fixed_28","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#8413-2021-07-08","title":"[8.4.13] - 2021-07-08","text":""},{"location":"Scripts/cntools-changelog/#changed_18","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#8412-2021-06-28","title":"[8.4.12] - 2021-06-28","text":""},{"location":"Scripts/cntools-changelog/#fixed_29","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#8411-2021-06-25","title":"[8.4.11] - 2021-06-25","text":""},{"location":"Scripts/cntools-changelog/#changed_19","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#8410-2021-06-15","title":"[8.4.10] - 2021-06-15","text":""},{"location":"Scripts/cntools-changelog/#fixed_30","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#849-2021-06-15","title":"[8.4.9] - 2021-06-15","text":""},{"location":"Scripts/cntools-changelog/#changed_20","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#846-2021-06-04","title":"[8.4.6] - 2021-06-04","text":""},{"location":"Scripts/cntools-changelog/#fixed_31","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#845-2021-05-31","title":"[8.4.5] - 2021-05-31","text":""},{"location":"Scripts/cntools-changelog/#fixed_32","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#844-2021-05-19","title":"[8.4.4] - 2021-05-19","text":""},{"location":"Scripts/cntools-changelog/#fixed_33","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#843-2021-05-17","title":"[8.4.3] - 2021-05-17","text":""},{"location":"Scripts/cntools-changelog/#fixed_34","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#842-2021-05-16","title":"[8.4.2] - 2021-05-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_35","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#841-2021-05-16","title":"[8.4.1] - 
2021-05-16","text":""},{"location":"Scripts/cntools-changelog/#changed_21","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#840-2021-05-16","title":"[8.4.0] - 2021-05-16","text":""},{"location":"Scripts/cntools-changelog/#added_5","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#830-2021-05-15","title":"[8.3.0] - 2021-05-15","text":""},{"location":"Scripts/cntools-changelog/#added_6","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_22","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_36","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#822-2021-05-02","title":"[8.2.2] - 2021-05-02","text":""},{"location":"Scripts/cntools-changelog/#fixed_37","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#821-2021-04-26","title":"[8.2.1] - 2021-04-26","text":""},{"location":"Scripts/cntools-changelog/#changed_23","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#820-2021-04-18","title":"[8.2.0] - 2021-04-18","text":""},{"location":"Scripts/cntools-changelog/#added_7","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_24","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#816-2021-04-14","title":"[8.1.6] - 2021-04-14","text":""},{"location":"Scripts/cntools-changelog/#changed_25","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_38","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#815-2021-04-09","title":"[8.1.5] - 2021-04-09","text":""},{"location":"Scripts/cntools-changelog/#fixed_39","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#814-2021-04-05","title":"[8.1.4] - 2021-04-05","text":""},{"location":"Scripts/cntools-changelog/#changed_26","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#813-2021-04-01","title":"[8.1.3] - 2021-04-01","text":""},{"location":"Scripts/cntools-changelog/#fixed_40","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#812-2021-03-31","title":"[8.1.2] - 2021-03-31","text":""},{"location":"Scripts/cntools-changelog/#changed_27","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#811-2021-03-30","title":"[8.1.1] - 2021-03-30","text":""},{"location":"Scripts/cntools-changelog/#fixed_41","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#810-2021-03-26","title":"[8.1.0] - 2021-03-26","text":""},{"location":"Scripts/cntools-changelog/#added_8","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_28","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_42","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#802-2021-03-15","title":"[8.0.2] - 2021-03-15","text":""},{"location":"Scripts/cntools-changelog/#fixed_43","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#801-2021-03-05","title":"[8.0.1] - 2021-03-05","text":""},{"location":"Scripts/cntools-changelog/#fixed_44","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#800-2021-02-28","title":"[8.0.0] - 2021-02-28","text":""},{"location":"Scripts/cntools-changelog/#added_9","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_29","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_45","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#716-2021-02-10","title":"[7.1.6] - 2021-02-10","text":""},{"location":"Scripts/cntools-changelog/#715-2021-02-03","title":"[7.1.5] - 
2021-02-03","text":""},{"location":"Scripts/cntools-changelog/#changed_30","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_46","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#714-2021-02-01","title":"[7.1.4] - 2021-02-01","text":""},{"location":"Scripts/cntools-changelog/#fixed_47","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#713-2021-01-30","title":"[7.1.3] - 2021-01-30","text":""},{"location":"Scripts/cntools-changelog/#fixed_48","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#711-2021-01-29","title":"[7.1.1] - 2021-01-29","text":""},{"location":"Scripts/cntools-changelog/#changed_31","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#710-2021-01-29","title":"[7.1.0] - 2021-01-29","text":""},{"location":"Scripts/cntools-changelog/#changed_32","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#702-2021-01-17","title":"[7.0.2] - 2021-01-17","text":""},{"location":"Scripts/cntools-changelog/#changed_33","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_49","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#701-2021-01-13","title":"[7.0.1] - 2021-01-13","text":""},{"location":"Scripts/cntools-changelog/#changed_34","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#700-2021-01-11","title":"[7.0.0] - 2021-01-11","text":"

Though mostly unchanged in the user interface, this is a major update with most of the back-end code re-written or touched. Only the most noticeable changes are added to the changelog.

      "},{"location":"Scripts/cntools-changelog/#added_10","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_35","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_50","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#631-2020-12-14","title":"[6.3.1] - 2020-12-14","text":""},{"location":"Scripts/cntools-changelog/#fixed_51","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#630-2020-12-03","title":"[6.3.0] - 2020-12-03","text":""},{"location":"Scripts/cntools-changelog/#changed_36","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_52","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#621-2020-11-28","title":"[6.2.1] - 2020-11-28","text":""},{"location":"Scripts/cntools-changelog/#changed_37","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#620-alpha-branch","title":"[6.2.0] - (alpha branch)","text":""},{"location":"Scripts/cntools-changelog/#added_11","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_38","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_53","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#610-2020-10-22","title":"[6.1.0] - 2020-10-22","text":""},{"location":"Scripts/cntools-changelog/#added_12","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_39","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_54","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#603-2020-10-16","title":"[6.0.3] - 2020-10-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_55","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#602-2020-10-16","title":"[6.0.2] - 2020-10-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_56","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#601-2020-10-16","title":"[6.0.1] - 2020-10-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_57","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#600-2020-10-15","title":"[6.0.0] - 2020-10-15","text":"

This is a major release with a lot of changes. It is highly recommended that you familiarise yourself with the usage of Hybrid or Online vs Offline mode in a testnet environment before doing it on production. Please visit https://cardano-community.github.io/guild-operators/upgrade for details.

      "},{"location":"Scripts/cntools-changelog/#added_13","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_40","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#removed","title":"Removed","text":""},{"location":"Scripts/cntools-changelog/#fixed_58","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#500-2020-07-20","title":"[5.0.0] - 2020-07-20","text":""},{"location":"Scripts/cntools-changelog/#added_14","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_41","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#removed_1","title":"Removed","text":""},{"location":"Scripts/cntools-changelog/#fixed_59","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#400-2020-07-13","title":"[4.0.0] - 2020-07-13","text":""},{"location":"Scripts/cntools-changelog/#added_15","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_42","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#300-2020-07-12","title":"[3.0.0] - 2020-07-12","text":""},{"location":"Scripts/cntools-changelog/#added_16","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_43","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#fixed_60","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#200-2020-07-12","title":"[2.0.0] - 2020-07-12","text":""},{"location":"Scripts/cntools-changelog/#added_17","title":"Added","text":""},{"location":"Scripts/cntools-changelog/#changed_44","title":"Changed","text":""},{"location":"Scripts/cntools-changelog/#removed_2","title":"Removed","text":""},{"location":"Scripts/cntools-changelog/#fixed_61","title":"Fixed","text":""},{"location":"Scripts/cntools-changelog/#100-2020-07-07","title":"[1.0.0] - 2020-07-07","text":""},{"location":"Scripts/cntools-common/","title":"Common Tasks","text":"

      Important

      Familiarize yourself with the Online workflow of creating wallets and pools on the Preview/Preprod/Guild network first. You can then move on to test the Offline Workflow. The Offline workflow means that the private keys never touch the Online node. When comfortable with both the online and offline CNTools workflow, it's time to deploy what you learnt on the mainnet.

      This chapter describes some common use-cases for wallet and pool creation when running CNTools in Online mode. CNTools contains much more functionality not described here.

      Create Wallet

A wallet is needed for pledge and to pay the pool registration fee.

      1. Choose [w] Wallet and you will be presented with the following menu:
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Wallet Management\n\n ) New         - create a new wallet\n ) Import      - import a Daedalus/Yoroi 24/25 mnemonic or Ledger/Trezor HW wallet\n ) Register    - register a wallet on chain\n ) De-Register - De-Register (retire) a registered wallet\n ) List        - list all available wallets in a compact view\n ) Show        - show detailed view of a specific wallet\n ) Remove      - remove a wallet\n ) Decrypt     - remove write protection and decrypt wallet\n ) Encrypt     - encrypt wallet keys and make all files immutable\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Wallet Operation\n\n  [n] New\n  [i] Import\n  [r] Register\n  [z] De-Register\n  [l] List\n  [s] Show\n  [x] Remove\n  [d] Decrypt\n  [e] Encrypt\n  [h] Home\n
2. Choose [n] New to create a new wallet. [i] Import can also be used to import a Daedalus/Yoroi based 15 or 24 word wallet seed.
      3. Give the wallet a name
      4. CNTools will give you the wallet address. For example:
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> NEW\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nName of new wallet: Test\n\nNew Wallet         : Test\nAddress            : addr_test1qpq5qjr774cyc6kxcwp060k4t4hwp42q43v35lmcg3gcycu5uwdwld5yr8m8fgn7su955zf5qahtrgljqfjfa4nr8jfsj4alxk\nEnterprise Address : addr_test1vpq5qjr774cyc6kxcwp060k4t4hwp42q43v35lmcg3gcyccuxhdka\n\nYou can now send and receive Ada using the above addresses.\nNote that Enterprise Address will not take part in staking.\nWallet will be automatically registered on chain if you\nchoose to delegate or pledge wallet when registering a stake pool.\n
      5. Send some money to this wallet. Either through the faucet or have a friend send you some.
      Import Daedalus/Yoroi/HW Wallet

      The Import feature of CNTools is originally based on this guide from Ilap.

If you would like to use the Import function to import a Daedalus/Yoroi based 15 or 24 word wallet seed, please ensure that the cardano-address and bech32 binaries are available in your $PATH environment variable:

      bech32 --version\n1.1.0\n\ncardano-address --version\n3.5.0\n

      If the version is not as per above, please run the latest guild-deploy.sh from here and rebuild cardano-node as instructed here.

To import a Daedalus/Yoroi wallet to CNTools, open CNTools and select the [w] Wallet option, then select [i] Import; the following menu will appear:

      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> IMPORT\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Wallet Import\n\n ) Mnemonic  - Daedalus/Yoroi 24 or 25 word mnemonic\n ) HW Wallet - Ledger/Trezor hardware wallet\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Wallet operation\n\n  [m] Mnemonic\n  [w] HW Wallet\n  [h] Home\n

      Note

You can import a hardware wallet using [w] HW Wallet above, but please note that before you are able to use a hardware wallet in CNTools, you need to ensure you can detect your hardware device at the OS level using cardano-hw-cli.
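As a quick check (a sketch - the exact subcommand may vary between cardano-hw-cli releases), you can confirm the device is detected at the OS level with:

cardano-hw-cli device version\n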

Select the type of wallet you want to import; for Daedalus/Yoroi wallets select [m] Mnemonic:

      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> IMPORT >> MNEMONIC\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nName of imported wallet: TEST\n\n24 or 15 word mnemonic(space separated):\n
Give your wallet a name (in this case 'TEST'), and enter your mnemonic phrase. Please ensure that you READ through the complete notes presented by CNTools before proceeding.

      Create Pool

      Create the necessary pool keys.

      1. From the main menu select [p] Pool
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Pool Management\n\n ) New      - create a new pool\n ) Register - register created pool on chain using a stake wallet (pledge wallet)\n ) Modify   - change pool parameters and register updated pool values on chain\n ) Retire   - de-register stake pool from chain in specified epoch\n ) List     - a compact list view of available local pools\n ) Show     - detailed view of specified pool\n ) Rotate   - rotate pool KES keys\n ) Decrypt  - remove write protection and decrypt pool\n ) Encrypt  - encrypt pool cold keys and make all files immutable\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Pool Operation\n\n  [n] New\n  [r] Register\n  [m] Modify\n  [x] Retire\n  [l] List\n  [s] Show\n  [o] Rotate\n  [d] Decrypt\n  [e] Encrypt\n  [h] Home\n
      2. Select [n] New to create a new pool
      3. Give the pool a name. In our case, we call it TEST. The result should look something like this:
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> NEW\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nPool Name: TEST\n\nPool: TEST\nID (hex)    : 8d5a3510f18ce241115da38a1b2419ed82d308599c16e98caea1b4c0\nID (bech32) : pool134dr2y833n3yzy2a5w9pkfqeakpdxzzenstwnr9w5x6vqtnclue\n
      Register Pool

      Register the pool on-chain.

      1. From the main menu select [p] Pool
      2. Select [r] Register
      3. Select the pool you just created
      4. CNTools will give you prompts to set pledge, margin, cost, metadata, and relays. Enter values that are useful to you.

Make sure you set your pledge low enough to ensure the funds in your wallet will cover pledge plus the pool registration fees.

1. Select the wallet to use as pledge wallet, Test in our case. As this is a newly created wallet, you will be prompted to continue with wallet registration. When complete, and if successful, both wallet and pool will be registered on-chain.

      It will look something like this:

      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> REGISTER\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOnline mode  -  The default mode to use if all keys are available\n\nHybrid mode  -  1) Go through the steps to build a transaction file\n                2) Copy the built tx file to an offline node\n                3) Sign it using 'Sign Tx' with keys on offline node\n                   (CNTools started in offline mode '-o' without node connection)\n                4) Copy the signed tx file back to the online node and submit using 'Submit Tx'\n\nSelected value: [o] Online\n\n# Select pool\nSelected pool: TEST\n\n# Pool Parameters\npress enter to use default value\n\nPledge (in Ada, default: 50,000):\nMargin (in %, default: 7.5):\nCost (in Ada, minimum: 340, default: 340):\n\n# Pool Metadata\n\nEnter Pool's JSON URL to host metadata file - URL length should be less than 64 chars (default: https://foo.bat/poolmeta.json):\n\nEnter Pool's Name (default: TEST):\nEnter Pool's Ticker , should be between 3-5 characters (default: TEST):\nEnter Pool's Description (default: No Description):\nEnter Pool's Homepage (default: https://foo.com):\n\nOptionally set an extended metadata URL?\nSelected value: [n] No\n{\n  \"name\": \"TEST\",\n  \"ticker\": \"TEST\",\n  \"description\": \"No Description\",\n  \"homepage\": \"https://foo.com\",\n  \"nonce\": \"1613146429\"\n}\n\nPlease host file /opt/cardano/guild/priv/pool/TEST/poolmeta.json as-is at https://foo.bat/poolmeta.json\n\n# Pool Relay Registration\nSelected value: [d] A or AAAA DNS record (single)\nEnter relays's DNS record, only A or AAAA DNS records: relay.foo.com\nEnter relays's port: 6000\nAdd more relay entries?\nSelected value: [n] No\n\n# Select main owner/pledge wallet (normal CLI wallet)\nSelected wallet: Test (100,000.000000 Ada)\nWallet Test3 not registered on chain\n\nWaiting for new block to be created (timeout = 600 slots, 600s)\nINFO: press any key to cancel and return (won't stop transaction)\n\nOwner #1 : Test added!\n\nRegister a multi-owner pool (you need to have stake.vkey of any additional owner in a seperate wallet folder under $CNODE_HOME/priv/wallet)?\nSelected value: [n] No\n\nUse a separate rewards wallet from main owner?\nSelected value: [n] No\n\nWaiting for new block to be created (timeout = 600 slots, 600s)\nINFO: press any key to cancel and return (won't stop transaction)\n\nPool TEST successfully registered!\nOwner #1      : Test\nReward Wallet : Test\nPledge        : 50,000 Ada\nMargin        : 7.5 %\nCost          : 340 Ada\n\nUncomment and set value for POOL_NAME in ./env with 'TEST'\n\nINFO: Total balance in 1 owner/pledge wallet(s) are: 99,497.996518 Ada\n

      1. As mentioned in the above output: Uncomment and set value for POOL_NAME in ./env with 'TEST' (in our case, the POOL_NAME is TEST). The cnode.sh script will automatically detect whether the files required to run as a block producing node are present in the $CNODE_HOME/priv/pool/<POOL_NAME> directory.
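For illustration, the relevant line in ./env would then look like this (uncommented, with our example pool name):

POOL_NAME=\"TEST\"                                        # Set the pool's name to run node as a core node (the name, NOT the ticker, ie folder name)\n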
      Rotate KES Keys

The node runs with an operational certificate, generated using the KES hot key. For security reasons, the protocol requires you to re-generate (or rotate) your KES key once it reaches expiry. On mainnet, this expiry occurs after 62 KES periods (roughly a quarter), after which your node will not be able to forge valid blocks until the key is rotated. To be able to rotate KES keys, your cold key files (cold.skey, cold.vkey and cold.counter) need to be present on the machine where you run CNTools to rotate your KES key.
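As a rough back-of-the-envelope check (a sketch assuming the mainnet Shelley genesis values slotsPerKESPeriod=129600 and maxKESEvolutions=62), the validity window works out to roughly a quarter:

slots_per_kes_period=129600   # 36 hours at 1 second per slot (assumed mainnet value)\nmax_kes_evolutions=62\necho \"KES keys valid for ~$(( slots_per_kes_period * max_kes_evolutions / 86400 )) days\"   # ~93 days\n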

1. To rotate KES keys and generate the operational certificate - op.cert:

      2. From the main menu select [p] Pool

      3. Select [o] Rotate
      4. Select the pool you just created

      The output should look like:

      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> ROTATE KES\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSelect pool to rotate KES keys on\nSelected pool: TEST\n\nPool KES keys successfully updated\nNew KES start period  : 240\nKES keys will expire  : 302 - 2021-09-04 11:24:31 UTC\n\nRestart your pool node for changes to take effect\n\npress any key to return to home menu\n
      1. Start or restart your cardano-node. If deployed as a systemd service as shown here, you can run sudo systemctl restart cnode.
      2. Ensure the node is running as a block producing (core) node.

      You can use gLiveView - the output at the top should say > Cardano Node - (Core - Guild).

      Alternatively, you can check the node logs in $CNODE_HOME/logs/ to see whether the node is performing leadership checks (TraceStartLeadershipCheck, TraceNodeIsNotLeader, etc.)
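A hedged example of such a log check (assuming the default JSON logging setup, where the node log ends up as $CNODE_HOME/logs/node0.json):

grep -E \"TraceStartLeadershipCheck|TraceNodeIsNotLeader\" \"${CNODE_HOME}\"/logs/node0.json | tail -n 5\n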

      "},{"location":"Scripts/cntools/","title":"Overview","text":"

      Important

Koios CNTools is like a Swiss Army knife for pool operators, simplifying typical operations regarding their wallet keys and pool management. Please note that this tool only aims to simplify usual tasks for its users, but it should NOT act as an excuse to skip understanding how to manually work through things or the basics of Linux operations. The skills highlighted on the home page are paramount for a stake pool operator, and so is an understanding of configuration files and the network. Please ensure you've read and understood the disclaimers before proceeding.

      Visit the Changelog section to see progress and current release.

      "},{"location":"Scripts/cntools/#overview","title":"Overview","text":"

The tool consists of three files.

      In addition to the above files, there is also a dependency on the common env file. CNTools connects to your node through the configuration in the env file located in the same directory as the script. Customize env and cntools.sh files to your needs.

      Additionally, CNTools can integrate and enable optional functionalities based on external components:

      See CNCLI and Log Monitor sections for more details.

Koios CNTools can operate in the following modes:

      "},{"location":"Scripts/cntools/#download-and-update","title":"Download and Update","text":"

      The update functionality is provided from within CNTools. In case of breaking changes, please follow the prompts post-upgrade. If stuck, it's always best to re-run the latest guild-deploy.sh before proceeding.

      If you have not updated in a while, it is possible that you might come from a release with breaking changes. If so, please be sure to check out the upgrade instructions.

      "},{"location":"Scripts/cntools/#navigation","title":"Navigation","text":"

The scripts menu supports both arrow key navigation and shortcut key selection. The character within the square brackets is the shortcut to press for quick navigation. For other selections, like the wallet and pool menus that don't contain shortcuts, there is a third way to navigate: the key pressed is compared to the first character of each menu option and, if there is a match, the selection jumps to that location - a handy way to quickly navigate a large menu.

      "},{"location":"Scripts/cntools/#hardware-wallet","title":"Hardware Wallet","text":"

CNTools has included hardware wallet support since version 7.0.0, through the Vacuumlabs cardano-hw-cli application. Initialize the device and update its firmware/app to the latest version before usage, following the manufacturer's instructions.

      To enable hardware support run guild-deploy.sh -s w. This downloads and installs Vacuumlabs cardano-hw-cli including udev configuration. When a new version of Vacuumlabs cardano-hw-cli is released, run guild-deploy.sh -s w again to update. For additional runtime options, run guild-deploy.sh -h.

      Ledger Trezor "},{"location":"Scripts/cntools/#offline-workflow","title":"Offline Workflow","text":"

      CNTools can be run in online and offline mode. At a very high level, for working with offline devices, remember that you need to use CNTools in an online node to generate a staging transaction for the desired type of transaction, and then move the staging transaction to an offline node to sign (authorize) using the signing keys on your offline node - and then bring back the signed transaction to the online node for submission to the chain.

For the offline workflow, all the wallet and pool keys should be kept on the offline node. The backup function in CNTools has an option to create a backup without private keys (sensitive signing keys) to be transferred to the online node. All other files are included in the backup to be transferred to the online node.

Keys excluded from a backup created without private keys: Wallet - payment.skey, stake.skey; Pool - cold.skey.

Note that setting up an offline server requires a good SysOps background (you need to be aware of how to set up your server with an offline mirror repository, how to transfer files across, and be fairly familiar with the disk layout presented in the documentation). The guild-deploy.sh in its current state is not expected to run on an offline machine. Essentially, you simply need the cardano-cli, bech32 and cardano-address binaries in your $PATH, the OS level dependency packages [jq, coreutils, pkgconfig, gcc-c++ and bc], and perhaps a copy from your online cnode directory (to ensure you have the right genesis/config files on your offline server). We strongly recommend that you familiarise yourself with the workflow on the preview / preprod / guild networks first, before attempting it on mainnet.
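A small sanity-check sketch for the offline box (a hypothetical helper, not part of the official scripts), verifying that the required binaries are reachable in $PATH:

for bin in cardano-cli bech32 cardano-address jq bc; do command -v \"$bin\" >/dev/null || echo \"missing: $bin\"; done\n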

      Example workflow for creating a wallet and pool:

      sequenceDiagram Note over Offline: Create/Import a wallet Note over Offline: Create a new pool Note over Offline: Rotate KES keys to generate op.cert Note over Offline: Create a backup w/o private keys Offline->>Online: Transfer backup to online node Note over Online: Fund the wallet base address with enough Ada Note over Online: Register wallet using ' Wallet \u00bb Register ' in hybrid mode Online->>Offline: Transfer built tx file back to offline node Note over Offline: Use ' Transaction >> Sign ' with payment.skey from wallet to sign transaction Offline->>Online: Transfer signed tx back to online node Note over Online: Use ' Transaction >> Submit ' to send signed transaction to blockchain Note over Online: Register pool in hybrid mode loop Offline-->Online: Repeat steps to sign and submit built pool registration transaction end Note over Online: Verify that pool was successfully registered with ' Pool \u00bb Show ' Online mode

      To start CNTools in Online (advanced) Mode, execute the script from the $CNODE_HOME/scripts/ directory:

      cd $CNODE_HOME/scripts\n./cntools.sh -a\n

      You should get a screen that looks something like this:

      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> Koios CNTools vX.X.X - Guild - CONNECTED <<\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Main Menu    Telegram Announcement / Support channel: t.me/CardanoKoios/9759\n\n ) Wallet      - create, show, remove and protect wallets\n ) Funds       - send, withdraw and delegate\n ) Pool        - pool creation and management\n ) Transaction - Sign and Submit a cold transaction (hybrid/offline mode)\n ) Blocks      - show core node leader schedule & block production statistics\n ) Backup      - backup & restore of wallet/pool/config\n ) Advanced    - Developer and advanced features: metadata, multi-assets, ...\n ) Refresh     - reload home screen content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n                                                  Epoch 276 - 3d 19:08:27 until next\n What would you like to do?                                         Node Sync: 12 :)\n\n  [w] Wallet\n  [f] Funds\n  [p] Pool\n  [t] Transaction\n  [b] Blocks\n  [u] Update\n  [z] Backup & Restore\n  [a] Advanced\n  [r] Refresh\n  [q] Quit\n
      Offline mode

      To start CNTools in Offline Mode, execute the script from the $CNODE_HOME/scripts/ directory using the -o flag:

      cd $CNODE_HOME/scripts\n./cntools.sh -o\n

The main menu header should let you know that CNTools is started in offline mode:

      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> Koios CNTools vX.X.X - Guild - OFFLINE <<\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Main Menu    Telegram Announcement / Support channel: t.me/CardanoKoios/9759\n\n ) Wallet      - create, show, remove and protect wallets\n ) Funds       - send, withdraw and delegate\n ) Pool        - pool creation and management\n ) Transaction - Sign and Submit a cold transaction (hybrid/offline mode)\n\n ) Backup      - backup & restore of wallet/pool/config\n\n ) Refresh     - reload home screen content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n                                                  Epoch 276 - 3d 19:03:46 until next\n What would you like to do?\n\n  [w] Wallet\n  [f] Funds\n  [p] Pool\n  [t] Transaction\n  [z] Backup & Restore\n  [r] Refresh\n  [q] Quit\n

      "},{"location":"Scripts/env/","title":"Common env","text":"

A common environment file called env is sourced by most scripts in the Guild Operators repository. This file holds common variables and functions needed by other scripts. There are several benefits to this: duplicate settings don't need to be specified, and functions can be reused, decreasing the risk of misconfiguration and inconsistency.
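For instance, a custom helper script could reuse these variables and functions by sourcing env first (a minimal sketch, assuming the default $CNODE_HOME/scripts location):

#!/usr/bin/env bash\n. \"${CNODE_HOME:-/opt/cardano/cnode}/scripts/env\"\necho \"Node port: ${CNODE_PORT}\"\n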

      "},{"location":"Scripts/env/#installation","title":"Installation","text":"

The env file is downloaded together with the rest of the scripts when the Pre-Requisites are followed, and is located in the $CNODE_HOME/scripts/ directory. The file is also automatically downloaded/updated by some of the individual scripts if missing, like cntools.sh, gLiveView.sh and topologyUpdater.sh. All custom changes in the User Variables section are untouched on updates unless a forced overwrite is selected when running guild-deploy.sh.

      "},{"location":"Scripts/env/#configuration","title":"Configuration","text":"

Most variables can be left commented to use the automatically detected or default value, but there are some that need to be set, as explained below.

Take your time and look through the different variables and their explanations and decide if you need/want to change the default setting. For a default deployment using guild-deploy.sh, CNODE_PORT (all installs) and POOL_NAME (only block producer) should be the only variables that need to be set.

      ######################################\n# User Variables - Change as desired #\n# Leave as is if unsure              #\n######################################\n\n#CCLI=\"${HOME}/.local/bin/cardano-cli\"                  # Override automatic detection of path to cardano-cli executable\n#CNCLI=\"${HOME}/.local/bin/cncli\"                       # Override automatic detection of path to cncli executable (https://github.com/AndrewWestberg/cncli)\n#CNODE_HOME=\"/opt/cardano/cnode\"                        # Override default CNODE_HOME path (defaults to /opt/cardano/cnode)\nCNODE_PORT=6000                                         # Set node port\n#CONFIG=\"${CNODE_HOME}/files/config.json\"               # Override automatic detection of node config path\n#SOCKET=\"${CNODE_HOME}/sockets/node0.socket\"            # Override automatic detection of path to socket\n#TOPOLOGY=\"${CNODE_HOME}/files/topology.json\"           # Override default topology.json path\n#LOG_DIR=\"${CNODE_HOME}/logs\"                           # Folder where your logs will be sent to (must pre-exist)\n#DB_DIR=\"${CNODE_HOME}/db\"                              # Folder to store the cardano-node blockchain db\n#UPDATE_CHECK=\"Y\"                                       # Check for updates to scripts, it will still be prompted before proceeding (Y|N).\n#TMP_DIR=\"/tmp/cnode\"                                   # Folder to hold temporary files in the various scripts, each script might create additional subfolders\n#EKG_HOST=127.0.0.1                                     # Set node EKG host IP\n#EKG_PORT=12788                                         # Override automatic detection of node EKG port\n#PROM_HOST=127.0.0.1                                    # Set node Prometheus host IP\n#PROM_PORT=12798                                        # Override automatic detection of node Prometheus port\n#EKG_TIMEOUT=3                                          # Maximum time in seconds that you allow EKG request to take before aborting (node metrics)\n#CURL_TIMEOUT=10                                        # Maximum time in seconds that you allow curl file download to take before aborting (GitHub update process)\n#BLOCKLOG_DIR=\"${CNODE_HOME}/guild-db/blocklog\"         # Override default directory used to store block data for core node\n#BLOCKLOG_TZ=\"UTC\"                                      # TimeZone to use when displaying blocklog - https://en.wikipedia.org/wiki/List_of_tz_database_time_zones\n#SHELLEY_TRANS_EPOCH=208                                # Override automatic detection of shelley epoch start, e.g 208 for mainnet\n#TG_BOT_TOKEN=\"\"                                        # Uncomment and set to enable telegramSend function. To create your own BOT-token and Chat-Id follow guide at:\n#TG_CHAT_ID=\"\"                                          # https://cardano-community.github.io/guild-operators/Scripts/sendalerts\n#USE_EKG=\"N\"                                            # Use EKG metrics from the node instead of Promethus. 
Promethus metrics(default) should yield slightly better performance\n#TIMEOUT_LEDGER_STATE=300                               # Timeout in seconds for querying and dumping ledger-state\n#IP_VERSION=4                                           # The IP version to use for push and fetch, valid options: 4 | 6 | mix (Default: 4)\n\n#WALLET_FOLDER=\"${CNODE_HOME}/priv/wallet\"              # Root folder for Wallets\n#POOL_FOLDER=\"${CNODE_HOME}/priv/pool\"                  # Root folder for Pools\n# Each wallet and pool has a friendly name and subfolder containing all related keys, certificates, ...\n#POOL_NAME=\"\"                                           # Set the pool's name to run node as a core node (the name, NOT the ticker, ie folder name)\n\n#WALLET_PAY_VK_FILENAME=\"payment.vkey\"                  # Standardized names for all wallet related files\n#WALLET_PAY_SK_FILENAME=\"payment.skey\"\n#WALLET_HW_PAY_SK_FILENAME=\"payment.hwsfile\"\n#WALLET_PAY_ADDR_FILENAME=\"payment.addr\"\n#WALLET_BASE_ADDR_FILENAME=\"base.addr\"\n#WALLET_STAKE_VK_FILENAME=\"stake.vkey\"\n#WALLET_STAKE_SK_FILENAME=\"stake.skey\"\n#WALLET_HW_STAKE_SK_FILENAME=\"stake.hwsfile\"\n#WALLET_STAKE_ADDR_FILENAME=\"reward.addr\"\n#WALLET_STAKE_CERT_FILENAME=\"stake.cert\"\n#WALLET_STAKE_DEREG_FILENAME=\"stake.dereg\"\n#WALLET_DELEGCERT_FILENAME=\"delegation.cert\"\n\n#POOL_ID_FILENAME=\"pool.id\"                             # Standardized names for all pool related files\n#POOL_HOTKEY_VK_FILENAME=\"hot.vkey\"\n#POOL_HOTKEY_SK_FILENAME=\"hot.skey\"\n#POOL_COLDKEY_VK_FILENAME=\"cold.vkey\"\n#POOL_COLDKEY_SK_FILENAME=\"cold.skey\"\n#POOL_OPCERT_COUNTER_FILENAME=\"cold.counter\"\n#POOL_OPCERT_FILENAME=\"op.cert\"\n#POOL_VRF_VK_FILENAME=\"vrf.vkey\"\n#POOL_VRF_SK_FILENAME=\"vrf.skey\"\n#POOL_CONFIG_FILENAME=\"pool.config\"\n#POOL_REGCERT_FILENAME=\"pool.cert\"\n#POOL_CURRENT_KES_START=\"kes.start\"\n#POOL_DEREGCERT_FILENAME=\"pool.dereg\"\n\n#ASSET_FOLDER=\"${CNODE_HOME}/priv/asset\"                # Root folder for Multi-Assets containing minted assets and subfolders for Policy IDs\n#ASSET_POLICY_VK_FILENAME=\"policy.vkey\"                 # Standardized names for all multi-asset related files\n#ASSET_POLICY_SK_FILENAME=\"policy.skey\"\n#ASSET_POLICY_SCRIPT_FILENAME=\"policy.script\"           # File extension '.script' mandatory\n#ASSET_POLICY_ID_FILENAME=\"policy.id\"\n
      "},{"location":"Scripts/gliveview/","title":"gLiveView","text":"

      Reminder !!

      Ensure the Pre-Requisites are in place before you proceed.

      Koios gLiveView is a local monitoring tool to use in addition to remote monitoring tools like Prometheus/Grafana, Zabbix or IOG's RTView. This is especially useful when moving to a systemd deployment - if you haven't done so already - as it offers an intuitive UI to monitor the node status.

      The tool is independent from other files and can run as a standalone utility that can be stopped/started without affecting the status of cardano-node.

      "},{"location":"Scripts/gliveview/#download","title":"Download","text":"

      If you've used guild-deploy.sh, you can skip this part, as this is already set up for you. The tool relies on the common env configuration file. To get current epoch blocks, the logMonitor.sh script is needed (and can be combined with CNCLI). This is optional and Koios gLiveView will function without it.

      Note

      For those who follow the folder structure in this repo and do not wish to run guild-deploy.sh, you can run the below in $CNODE_HOME/scripts folder

      To download the script:

      curl -s -o gLiveView.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/gLiveView.sh\ncurl -s -o env https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/env\nchmod 755 gLiveView.sh\n
      "},{"location":"Scripts/gliveview/#configuration-startup","title":"Configuration & Startup","text":"

For most setups, it's enough to set CNODE_PORT in the env file. The rest of the variables should automatically be detected. If required, modify User Variables in env and gLiveView.sh to suit your environment (if the folder structure you use is different). This should lead you to a stage where you can start running ./gLiveView.sh in the folder you downloaded the script to (the default location would be $CNODE_HOME/scripts). Note that the script is smart enough to automatically detect whether you're running as a Core or Relay and will show fields accordingly.

The tool can be run in legacy mode, using only standard ASCII characters, for terminals that have trouble displaying the box-drawing characters. Run ./gLiveView.sh -h to show available command-line parameters, or set it permanently directly in the script.
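To permanently enable legacy mode, for example, edit the User Variables section of gLiveView.sh (the same LEGACY_MODE variable appears in the excerpt further below) and set:

LEGACY_MODE=true                         # (true|false) If enabled unicode box-drawing characters will be replaced by standard ASCII characters\n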

      A sample output from both core and relay together with peer analysis:

      Core

      Relay

      Peer Analysis

      "},{"location":"Scripts/gliveview/#upper-main-section","title":"Upper main section","text":"

Displays live metrics from cardano-node gathered through the node's EKG/Prometheus (env setting) endpoint.
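To see the raw data gLiveView works from, you can query the node's Prometheus endpoint directly (a sketch using the default host/port from the env excerpt above; /metrics is the usual cardano-node Prometheus path):

curl -s http://127.0.0.1:12798/metrics | head -n 10\n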

      "},{"location":"Scripts/gliveview/#core-section","title":"Core section","text":"

      If the node is run as a core, identified by the 'forge-about-to-lead' parameter, a second core section is displayed.

      "},{"location":"Scripts/gliveview/#peer-analysis","title":"Peer analysis","text":"

      A manual peer analysis can be triggered by key press p. A latency test will be done on incoming and outgoing connections to the node.

      Note

Note that with P2P enabled, an incoming/outgoing connection can be reused for bi-directional traffic. There isn't a way to distinctly identify the P2P peer's direction yet for a given IP.

For outgoing connections (peers in the topology file), the ping type used is tried in this order: 1. cncli - if available, this gives the most accurate measure as it checks the entire handshake process against the remote peer. 2. ss - sends a TCP SYN packet to ping the remote peer on the cardano-node port; should give ~100% success rate. 3. tcptraceroute - same as ss. 4. ping - fallback method using ICMP ping against the IP; will only work if the firewall of the remote peer accepts ICMP traffic.

For incoming connections, only ICMP ping is used as the remote peer's port is unknown. It's not uncommon to see many undetermined peers for incoming connections, as it's a good security practice to disable ICMP in the firewall.
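If you want to reproduce a latency check by hand (a sketch reusing the example relay name from the pool registration section above; these are the same tools the script falls back to):

tcptraceroute relay.foo.com 6000\nping -c 3 relay.foo.com\n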

Once the analysis is finished, it will display the RTTs (round-trip times) for the peers and group them into the ranges 0-50, 50-100, 100-200 and >200 ms. The analysis is NOT live. Press [h] Home to go back to the default view or [i] Info to show in-script help text. The Up and Down arrow keys are used to select the incoming or outgoing detailed list of IPs and their RTT values. The Left (<) and Right (>) arrow keys can be used to navigate the pages in the selected list.

      "},{"location":"Scripts/gliveview/#troubleshootingcustomisations","title":"Troubleshooting/Customisations","text":"

In case you run into trouble while running the script, you might want to edit env & gLiveView.sh and look at the User Variables section. You can override the values if the automatic detection does not provide the right information, but we would appreciate it if you could also notify us by raising an issue against the GitHub repository:

      gLiveView.sh

      ######################################\n# User Variables - Change as desired #\n######################################\n\nNODE_NAME=\"Cardano Node\"                  # Change your node's name prefix here, keep at or below 19 characters!\nREFRESH_RATE=2                            # How often (in seconds) to refresh the view (additional time for processing and output may slow it down)\nLEGACY_MODE=false                         # (true|false) If enabled unicode box-drawing characters will be replaced by standard ASCII characters\nRETRIES=3                                 # How many attempts to connect to running Cardano node before erroring out and quitting\nPEER_LIST_CNT=6                           # Number of peers to show on each in/out page in peer analysis view\nTHEME=\"dark\"                              # dark  = suited for terminals with a dark background\n# light = suited for terminals with a bright background\nENABLE_IP_GEOLOCATION=\"Y\"                 # Enable IP geolocation on outgoing and incoming connections using ip-api.com\n

      "},{"location":"Scripts/itnrewards/","title":"Itnrewards","text":""},{"location":"Scripts/itnrewards/#concept","title":"Concept","text":"

To claim rewards earned during the Incentivized TestNet, the private and public keys from ITN must be converted to Shelley stake keys. A script called itnRewards.sh has been created to guide you through the process of converting the keys and creating a CNTools-compatible wallet from where the rewards can be withdrawn.

      graph TB A([\"itnRewards.sh\"]) A --x B([\"ITN Owner skey (ed25519[e]_sk)..\"]) --x D([\"cardano-cli shelley key convert-itn-key ..\"]) A --x C([\"ITN Owner vkey (ed25519_pk)..\"]) --x D D --x E([\"Stake skey/vkey\"]) --x L A --x F([\"cardano-cli shelley ..\"]) F --x G([\"Payment skey/vkey/addr\"]) --x L F --x H([\"Reward addr\"]) --x L F --x I([\"Base addr\"]) --x L L[CNTools Wallet] ;"},{"location":"Scripts/itnrewards/#steps","title":"Steps","text":""},{"location":"Scripts/itnwitness/","title":"Itnwitness","text":"

      Disclaimer

Currently this is to protect existing pools from the ITN that already have a delegator base against spoofing - to avoid scammers building on the ITN results of known pools. There may be a solution for Mainnet nodes in the future too - but in its current form this doesn't apply to them.

      "},{"location":"Scripts/itnwitness/#concept","title":"Concept","text":"

Due to the expected ticker spoofing attacks on pools that were famous during ITN, some community members have proposed an interim solution for delegators to verify the legitimacy of a pool. You can check the high-level workflow below:

      graph TB A(\"ITN Owner skey (ed25519/ed25519e) ..\") --x C([\"jcli key sign ..\"]) B(\"Haskell Pool ID (pool.id) ..\") --x C C --x D(\"Signature key, (pool.sig) ..\") E(\"ITN Owner vkey (ed25519_pk) ..\") --x F(\"Extended Metadata JSON (poolmeta_extended.json) ..\") D --x F F --x G(\"Pool Meta JSON (poolmeta.json) ..\") ;"},{"location":"Scripts/itnwitness/#steps","title":"Steps","text":"

The actual implementation is pretty straightforward and we will keep it brief, as we assume those participating are fairly familiar with jcli usage.

If the process is approved to appear in wallets, we may consider providing easier alternatives. If you have any queries about the process, or any additions, please create a git issue/PR against the guild repository - so we can capture common queries and update instructions/help text where appropriate.

      "},{"location":"Scripts/itnwitness/#sample-output-of-json-files-generated","title":"Sample output of JSON files generated","text":"
      {\n\"itn\": {\n\"owner\": \"ed25519_pk1...\",\n\"witness\": \"ed25519_sig1...\"\n}\n}\n
      "},{"location":"Scripts/logmonitor/","title":"Log Monitor","text":"

      Reminder !!

      Ensure the Pre-Requisites are in place before you proceed.

      logMonitor.sh is a general purpose JSON log monitoring script for traces created by cardano-node. Currently, it looks for traces related to leader slots and block creation but other uses could be added in the future.

      "},{"location":"Scripts/logmonitor/#block-traces","title":"Block traces","text":"

      For the core node (block producer) the logMonitor.sh script can be run to monitor the JSON log file created by cardano-node for traces related to leader slots and block creation.

      For optimal coverage, it's best run together with CNCLI scripts as they provide different functionalities. Together, they create a complete picture of blocks assigned, created, validated or invalidated due to node issues.

      "},{"location":"Scripts/logmonitor/#installation","title":"Installation","text":"

The script is best run as a background process. This can be accomplished in many ways, but the preferred method is to run it as a systemd service. A terminal multiplexer like tmux or screen could also be used but is not covered here.

Use the deploy-as-systemd.sh script to create a systemd unit file (deployed together with CNCLI). Log output is handled by syslog and ends up in the system's standard syslog file, normally /var/log/syslog. journalctl -f -u cnode-logmonitor.service can be used to check service output (follow mode). Other logging configurations are not covered here.
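
For reference, a minimal sketch of deploying and checking the service, assuming the default folder structure and service name used in this guide:

cd \"${CNODE_HOME}\"/scripts\n./deploy-as-systemd.sh                           # creates cnode-logmonitor.service among other units\nsudo systemctl status cnode-logmonitor.service\nsudo journalctl -f -u cnode-logmonitor.service\n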

      "},{"location":"Scripts/logmonitor/#view-blocklog","title":"View Blocklog","text":"

      Best viewed in CNTools or gLiveView. See CNCLI for example output.

      "},{"location":"Scripts/sendalerts/","title":"Sendalerts","text":"

      !> Ensure the Pre-Requisites are in place before you proceed.

      This section describes the ways in which CNTools can send important messages to the operator.

      "},{"location":"Scripts/sendalerts/#telegram-alerts","title":"Telegram alerts","text":"

If known but unwanted errors occur on your node, or if characteristic values indicate an unusual status, CNTools can send you Telegram alert messages.

      To do this, you first have to activate your own bot and link it to your own Telegram user. Here is an explanation of how this works:

      1. Open Telegram and search for \"botfather\".

2. Send it your wish: /newbot.

      3. Define a name for your bot, such as cntools_[POOLNAME]_alerts.

      4. Botfather will confirm the creation of your bot by giving you the unique bot access token. Keep it safe and private.

      5. Now send at least one direct message to your new bot.

6. Open the following URL in your browser, using the bot access token you just created:

      https://api.telegram.org/bot<your-access-token>/getUpdates\n
7. The result is a JSON document. Look for the value of result.message.chat.id. This chat ID should be a large integer number.

This is all you need to enable Telegram alerts - uncomment the TG_CHAT_ID user variable in the scripts/env file and add your chat ID:

      ...\nTG_CHAT_ID=\"<YOUR_TG_CHAT_ID>\"\n...  \n
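
To verify that the token and chat ID pair work before relying on alerts, you can optionally send yourself a test message via the Telegram Bot API - substitute your own access token and chat ID below:

curl -s \"https://api.telegram.org/bot<your-access-token>/sendMessage\" -d \"chat_id=<YOUR_TG_CHAT_ID>\" -d \"text=CNTools alert test\"\n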

      "},{"location":"Scripts/topologyupdater/","title":"Topology Updater","text":"

      Reminder !!

The topologyUpdater shell script must be executed on the relay node as a cronjob exactly every 60 minutes. After 4 consecutive requests (3 hours), the node is considered a new relay node and is listed in the topology file. If the node is turned off, it's automatically delisted after 3 hours.

      "},{"location":"Scripts/topologyupdater/#download","title":"Download and Configure","text":"

If you have run guild-deploy.sh, this should already be available in your scripts folder, making this step unnecessary.

      Before the updater can make a valid request to the central topology service, it must query the current tip/blockNo from the well-synced local node. It connects to your node through the configuration in the script as well as the common env configuration file. Customize these files for your needs.

      To download topologyUpdater.sh manually, you can execute the commands below and test executing Topology Updater once (it's OK if first execution gives back an error):

      cd $CNODE_HOME/scripts\ncurl -s -o topologyUpdater.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/topologyUpdater.sh\ncurl -s -o env https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/env\nchmod 750 topologyUpdater.sh\n./topologyUpdater.sh\n

      "},{"location":"Scripts/topologyupdater/#modify","title":"Examine and modify the variables within topologyUpdater.sh script","text":"

Out of the box, the scripts come with some assumptions that may or may not be valid for your environment. One of the common changes as an SPO would be to complete the CUSTOM_PEERS section as below to include your local relays/BP nodes (described in the How do I add my own nodes section), and any additional peers you'd like to always be available at a minimum. Please do take the time to update the variables in the User Variables section in env & topologyUpdater.sh:

      ### topologyUpdater.sh\n\n######################################\n# User Variables - Change as desired #\n######################################\n\nCNODE_HOSTNAME=\"CHANGE ME\"                                # (Optional) Must resolve to the IP you are requesting from\nCNODE_VALENCY=1                                           # (Optional) for multi-IP hostnames\nMAX_PEERS=15                                              # Maximum number of peers to return on successful fetch\n#CUSTOM_PEERS=\"None\"                                      # Additional custom peers to (IP,port[,valency]) to add to your target topology.json\n# eg: \"10.0.0.1,3001|10.0.0.2,3002|relays.mydomain.com,3003,3\"\n#BATCH_AUTO_UPDATE=N                                      # Set to Y to automatically update the script if a new version is available without user interaction\n

Any customisations you add above will be saved across future guild-deploy.sh executions, unless you specify the -f flag to overwrite completely.

      "},{"location":"Scripts/topologyupdater/#deploy","title":"Deploy the script","text":"

systemd service The script can be deployed as a background service in different ways, but the recommended and easiest way (if guild-deploy.sh was used) is to utilize the deploy-as-systemd.sh script to set up and schedule the execution. Running the deploy script will deploy both push & fetch service files, as well as timers for a scheduled 60-minute node alive message and a cnode restart at the user-set interval (default: 24 hours).

systemctl list-timers can be used to check the push and restart service schedules.
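
For example (the exact unit names depend on your deployment; the grep pattern below is only an assumption based on the cnode- prefix used by deploy-as-systemd.sh):

systemctl list-timers --all | grep cnode\n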

crontab job Another way to deploy the topologyUpdater.sh script is as a crontab job. Add the script to be executed once per hour at a minute of your choice (e.g. xx:25 in the example below). The example below will handle both the fetch and push in a single call to the script once an hour. In addition to the below crontab job for topologyUpdater, it's expected that you also add a scheduled restart of the relay node to pick up the fresh topology file fetched by the topologyUpdater script, with relays that are alive and well.

      25 * * * * /opt/cardano/cnode/scripts/topologyUpdater.sh\n
      "},{"location":"Scripts/topologyupdater/#logs","title":"Logs","text":"

You can check the result of the last push message in logs/topologyUpdater_lastresult.json. If deployed as a systemd service, use sudo journalctl -u <service> to check the output from the service.
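
A quick way to inspect the last result and the service output, assuming jq is installed and the default folder structure; the cnode-tu-push service name is an assumption based on a deploy-as-systemd.sh setup and may differ on your system:

jq . \"${CNODE_HOME}\"/logs/topologyUpdater_lastresult.json\nsudo journalctl -u cnode-tu-push.service --since \"1 hour ago\"\n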

If one of the parameters is outside the allowed ranges, invalid or missing, the returned JSON will tell you what needs to be fixed.

      Don't try to execute the script more often than once per hour. It's completely useless and may lead to a temporary blacklisting.

      "},{"location":"Scripts/topologyupdater/#why-does-my-topology-file-only-contain-iog-peers","title":"Why does my topology file only contain IOG peers?","text":"

Each subscribed node (4 consecutive requests) is allowed to fetch a subset of other nodes to prove the loyalty/stability of the relay. Until reaching this point, your fetch calls will only return IOG peers combined with any custom peers added in the USER VARIABLES section of the topologyUpdater.sh script.

The engineers of the cardano-node network stack suggest using around 20 peers. More peers create unnecessary and unwanted system load and delays.

      In its default setting, topologyUpdater returns a list of 15 remote peers.

      Note that the change in topology is only effective upon restart of your node. Make sure you account for some scheduled restarts on your relays, to help onboard newer relays onto the network (as described in the systemd section).

      "},{"location":"Scripts/topologyupdater/#how-do-i-add-my-own-relaysstatic-nodes-in-addition-to-dynamic-list-generated-by-topologyupdater","title":"How do I add my own relays/static nodes in addition to dynamic list generated by topologyUpdater?","text":"

Most Stake Pool Operators have a few preferences (own relays, close friends, etc.) that they would like to add to their topology by default. This is where the CUSTOM_PEERS variable in topologyUpdater.sh comes in. You can add a list of peers in the format hostname/IP,port[,valency] here and the output topology.json formed will already include the custom peers that you supplied. Every custom peer is defined in the form [address],[port] with an optional ,[valency] (if not specified, the valency defaults to 1). Multiple custom peers are separated by |. An example of a valid CUSTOM_PEERS variable would be:

      CUSTOM_PEERS=\"foo.bar.io,3001,2|198.175.21.197,6001|36.233.3.89,6000\n
      The list above would add three custom peers with the specified addresses and ports, with the first one additionally specifying the optional valency parameter (in this case 2).

      "},{"location":"Scripts/topologyupdater/#how-are-the-peers-for-my-topology-file-selected","title":"How are the peers for my topology file selected?","text":"

We calculate the distance on the Earth's surface from your node's IP to all subscribed peers. We then order the peers by distance (closest first) and start by selecting one peer. We then skip some, pick the next, skip, pick, skip, pick ... until we reach the end of the list (furthest away). The number of skipped records is calculated so that you end up with the desired number of peers.

      Every requesting node has its personal distance to all other nodes.

      We assume this should result in a well-distributed and interconnected peering network.
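
A rough, illustrative sketch of the skip-pick idea (this is not the actual server-side code; peers_by_distance.txt is a hypothetical file with peers sorted closest-first and MAX_PEERS is the desired count):

MAX_PEERS=15\nmapfile -t peers < peers_by_distance.txt\nstep=$(( ${#peers[@]} / MAX_PEERS )); (( step < 1 )) && step=1\npicked=0\nfor (( i=0; i<${#peers[@]} && picked<MAX_PEERS; i+=step )); do\n  echo \"${peers[$i]}\"           # every step-th peer, from closest to furthest\n  picked=$((picked+1))\ndone\n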

      "},{"location":"docker/build/","title":"Build","text":""},{"location":"docker/build/#intro","title":"Intro","text":"

      \ud83d\udca1 Docker containers are the fastest way to run a Cardano node in both \"Relay\" and \"Block-Producing\" (Pool) mode.

      "},{"location":"docker/build/#how-to-build","title":"How to build","text":"
      docker build -t cardanocommunity/cardano-node:latest - < dockerfile_bin\n
      "},{"location":"docker/build/#for-windows-users","title":"For Windows Users","text":"

With PowerShell on Windows, you can run Docker by typing the following command:

      Get-Content dockerfile_bin  | docker build -t guild-operators/cardano-node:latest -\n
      "},{"location":"docker/build/#see-also","title":"See also","text":"

      Docker Tips

      Docker Official Docs

      "},{"location":"docker/docker/","title":"Overview","text":"

      Running your own Cardano node has never been so fast and easy.

But first, a kind reminder about the security aspects of running Docker containers.

      "},{"location":"docker/docker/#external-resources","title":"External resources","text":""},{"location":"docker/docker/#built-in-cardano-software","title":"\ud83d\udd14 Built-in Cardano software","text":""},{"location":"docker/docker/#built-in-tools","title":"\ud83d\udd14 Built-in tools","text":""},{"location":"docker/docker/#docker-splash-screen","title":"Docker Splash screen","text":""},{"location":"docker/docker/#cntools","title":"Cntools","text":""},{"location":"docker/docker/#gliveview","title":"gLiveView","text":""},{"location":"docker/docker/#gliveview-peers-analyzer","title":"gLiveView Peers analyzer","text":""},{"location":"docker/docker/#cncli","title":"CNCLI","text":""},{"location":"docker/docker/#strategy","title":"Guild Operators Docker strategy ( mainnet/ preview / preprod / guild)","text":"

      Modular docker images based on Debian.

      Based on the Guild's work we decided to build the Cardano Node images in 3 stages:

      "},{"location":"docker/docker/#additional-docs","title":"Additional docs","text":"

If you prefer to build the images yourself, you can check:

      "},{"location":"docker/docker/#port-mapping","title":"Port mapping","text":"

      The dockerfiles are located in ./files/docker/

      Node Ports Wallet Ports Flavor Node (6000) Wallet (8090) Debian Prometheus (12798) Prometheus (12798) EKG (12781)"},{"location":"docker/run/","title":"Run","text":""},{"location":"docker/run/#os-requirements","title":"OS Requirements","text":" Private mode Public mode

      Note

1) --entrypoint=bash # This option won't start the node within the container but only the OS (the node software won't actually start; you'll need to manually execute entrypoint.sh), ready for you to get in (through the command docker exec -it < container name or hash > /bin/bash) and play/explore around with it in command-line mode. 2) All guild tools environment variables can be used to start a new container with custom values by using the \"-e\" option. 3) CPU, RAM and shared memory allocation options for the container can be set when you start the container (i.e. --shm-size, --memory or --cpus; see the official docker resource docs)

      "},{"location":"docker/run/#use-cases","title":"Use Cases","text":"
      docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
      "},{"location":"docker/run/#use-cases_1","title":"Use Cases:","text":"
      docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-p 6000:6000\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
      docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-e CONFIG=/opt/cardano/cnode/priv/<your own configuration files>.yml\n-p 6000:6000\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
      "},{"location":"docker/security/","title":"Security","text":""},{"location":"docker/security/#docker-security-best-practices","title":"Docker Security best practices","text":""},{"location":"docker/security/#intro","title":"Intro","text":"

      On the security front, Docker developers are faced with different types of security attacks such as:

Docker containers are now being exploited to covertly mine for cryptocurrency, marking a shift from ransomware to cryptocurrency malware. As with all things in security, Docker security is also a moving target \u2014 so it\u2019s helpful to have access to up-to-date information, including experience-based best practices, for securing your containerized environments.

      "},{"location":"docker/security/#here-below-some-key-concepts","title":"Here below some key concepts:","text":"
1. Use a Third-Party Security Tool Docker allows you to use containers from untrusted public repositories, which increases the need to scrutinize whether the container was created securely and whether it is free of any corrupt or malicious files. For this, use a multi-purpose security tool that gives extensive dev-to-production security controls. (keep reading below)

      2. Manage Vulnerability It is best to have a sound vulnerability management program that has multiple checks throughout the container lifecycle. Vulnerability management should incorporate quality gates to detect access issues and weaknesses for a potential exploit from dev-to-production environments.

      3. Monitor and Audit Container Activity It is vital to monitor the container ecosystem and detect suspicious activity. Container monitoring activities provide real-time reports that can help you react promptly to a security breach.

4. Enable Docker Content Trust Docker Content Trust is a new feature incorporated into Docker 1.8. It is disabled by default, but once enabled, allows you to verify the integrity, authenticity, and publication date of all Docker images from the Docker Hub Registry.

      5. Use Docker Bench for Security You should consider Docker Bench for Security as your must-use script. Once the script is run, you will notice a lot of information regarding configuration best practices for deploying Docker containers that can be used to further secure your Docker server and containers.

      6. Resource Utilization To reduce performance impacts and denial-of-service attacks, it is a good practice to implement limits on the system resources that the containers can consume. If, for example, a web server is compromised, it helps to limit the impact to the other processes that are running on a host.

7. RBAC RBAC is role-based access control. If you have multiple users accessing your environment, this is a must-have. It can be quite expensive to implement but Portainer makes it super easy.

      "},{"location":"docker/security/#security-docker-best-practices","title":"Security Docker best practices:","text":""},{"location":"docker/security/#the-guild-docker-images-are-not-using-all-the-following-tips-due-to-functional-purpose","title":"The Guild Docker images are not using all the following tips due to functional purpose","text":"

      Guild tips:

      Some more general tips:

      "},{"location":"docker/security/#notes","title":"Notes:","text":""},{"location":"docker/tips/","title":"Tips","text":""},{"location":"docker/tips/#how-to-run-a-cardano-node-with-docker","title":"How to run a Cardano Node with Docker","text":"

With this quick guide you will be able to run a cardano node in seconds and also have the powerful Koios SPO scripts built-in.

      "},{"location":"docker/tips/#how-to-operate-interactively-within-the-container","title":"How to operate interactively within the container","text":"

Once the container has been started as a daemon with an attached tty (the -dit flags), you are able to enter the container.

To get a shell inside the container's console, use the following command (replace CN with your container name):

      docker exec -it CN bash 

This command will bring you into the container's bash environment, ready to use the Koios tools.

      "},{"location":"docker/tips/#docker-flags-explained","title":"Docker flags explained","text":"
      \"docker build\" options explained:\n -t : option is to \"tag\" the image you can name the image as you prefer as long as you maintain the references between dockerfiles.\n\n\"docker run\" options explained:\n -d : for detach the container\n -i : interactive enabled -t : terminal session enabled\n -e : set an Env Variable\n -p : set exposed ports (by default if not specified the ports will be reachable only internally)\n--hostname : Container's hostname\n --name : Container's name\n
      "},{"location":"docker/tips/#custom-container-with-your-own-cfg","title":"Custom container with your own cfg","text":"
docker run --init -itd  \n--name Relay                                  # Optional (recommended for quick access): set a name for your newly created container.\n-p 9000:6000                                  # Optional: to expose the internal container's port (6000) to the host <IP> port 9000\n-e NETWORK=mainnet                            # Mandatory: mainnet / preprod / guild-mainnet / guild\n--security-opt=no-new-privileges              # Option to prevent privilege escalations\n-v <YourNetPath>:/opt/cardano/cnode/sockets   # Optional: useful to share the node socket with other containers\n-v <YourCfgPath>:/opt/cardano/cnode/priv      # Optional: if used has to contain all the sensitive keys needed to run a node as core\n-v <YourDBbk>:/opt/cardano/cnode/db           # Optional: if not set a fresh DB will be downloaded from scratch\ncardanocommunity/cardano-node:latest          # Mandatory: image to run\n

      Note

To be able to use the CNTools encryption key feature you need to manually change ENABLE_CHATTR to \"true\" in \"cntools.config\" and not use the --security-opt=no-new-privileges docker run option.

      "},{"location":"docker/tips/#docker-cli-managment","title":"Docker CLI managment","text":""},{"location":"docker/tips/#official","title":"Official","text":""},{"location":"docker/tips/#un-official-docker-managment-cli-tool","title":"Un-Official Docker managment cli tool","text":""},{"location":"docker/tips/#docker-backups-and-restores","title":"Docker backups and restores","text":"

The docker container has an optional backup and restore functionality that can be used to back up the /opt/cardano/cnode/db directory. To have the backup persist longer than the container, the backup directory should be mounted as a volume.

      [!NOTE] The backup and restore functionality is disabled by default.

      [!WARNING] Make sure adequate space exists on the host as the backup will double the space consumed by the database.

      "},{"location":"docker/tips/#creating-a-backup","title":"Creating a Backup","text":"

      When the container is started with the ENABLE_BACKUP environment variable set to Y the container will automatically create a backup in the /opt/cardano/cnode/backup/$NETWORK-db directory. The backup will be created when the container is started and if the backup directory is smaller than the db directory.
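
A minimal sketch of a backup-enabled run, assuming you mount a host path for the backup directory so it outlives the container (paths and the container name are placeholders):

docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-e ENABLE_BACKUP=Y\n-v <your_custom_db_path>:/opt/cardano/cnode/db\n-v <your_custom_backup_path>:/opt/cardano/cnode/backup\ncardanocommunity/cardano-node\n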

      "},{"location":"docker/tips/#restoring-from-a-backup","title":"Restoring from a Backup","text":"

      When the container is started with the ENABLE_RESTORE environment variable set to Y the container will automatically restore the latest backup from the /opt/cardano/cnode/backup/$NETWORK-db directory. The database will be restored when the container is started and if the backup directory is larger than the db directory.

      "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"

This documentation site (rather, the repository itself) is created by some well-known and experienced community members and contains instructions/information about various guild tools which simplify various stake-ops (setting up, managing and monitoring pools) for operators. Note that the guides are present to help you simplify your tasks - but as an entity responsible for creating blocks on a financial platform, we expect some basic pre-requisite skill sets - at a professional level - before entering the portal:

      Everyone is welcome to contribute to the repository (via documentation, testing, code, videos, etc). Our aim is to work together and reduce confusion rather than hosting 100 versions of documentation - each marketing their pool in a way.

      "},{"location":"#support","title":"Support","text":"

      The Telegram Support channel is used to announce new releases and changes to the code base. This is also the place to ask general questions regarding the documentation and scripts on this site.

      To report bugs and issues with scripts and documentation please open a GitHub Issue. Feature requests are best opened as a discussion thread.

      "},{"location":"#getting-started","title":"Getting Started","text":"

      Use the sidebar to navigate through the topics. Note that the instructions assume the folder structure as per here.

Again, feedback/contributions and ownership of tasks are always welcome. If you're interested in collaborating regularly, make a start - and you should be part of the guild already.

      "},{"location":"basics/","title":"Basics","text":""},{"location":"basics/#architecture","title":"Architecture","text":"

The architecture for the various components is already described at docs.cardano.org by CF/IOHK. We will not reinvent the wheel.

      "},{"location":"basics/#manual-software-pre-requirements","title":"Manual Software Pre-Requirements","text":"

While we do not intend to hand out step-by-step instructions, the tools are often misused as a shortcut to avoid ensuring the base skill sets mentioned on the home page. Some of the common gotchas that we often find SPOs miss out on:

- It is imperative that pools operate with highly accurate system time, in order to propagate blocks to the network in a timely manner and avoid penalties to own (or at times other competing) blocks. Please refer to sample guidance [here ](https://ubuntu.com/server/docs/network-ntp) for details - the precise steps may depend on your OS.\n- Ensure your Firewall rules at Network as well as OS level are updated according to the usage of your system; you'd want to whitelist the rules that you really need to open to the world (eg: You might need node, SSH, and potentially secured webserver/proxy ports to be open, depending on components you run).\n- Update your SSH Configuration to prevent password-based logon.\n- Ensure that you use an offline workflow; you should never need to have your offline keys on online nodes. The tools provide you backup/restore functionality to only pass online keys to online nodes.\n
      "},{"location":"basics/#pre-requisites","title":"Pre-Requisites","text":"

      Reminder !!

You're expected to run the commands below from the same session, using the same working directories as indicated, and using a non-root user with sudo access. You are expected to be familiar with this as part of the pre-requisite skill sets for stake pool operators.

      "},{"location":"basics/#os-prereqs","title":"Set up OS packages, folder structure and fetch files from repo","text":"

The pre-requisites for Linux systems are automated to be executed as a single script. To download the pre-requisites script, execute the below:

      mkdir \"$HOME/tmp\";cd \"$HOME/tmp\"\n# Install curl\n# CentOS / RedHat - sudo dnf -y install curl\n# Ubuntu / Debian - sudo apt -y install curl\ncurl -sS -o guild-deploy.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/guild-deploy.sh\nchmod 755 guild-deploy.sh\n

Please familiarise yourself with the syntax of guild-deploy.sh before proceeding. The usage syntax can be checked using ./guild-deploy.sh -h, sample output below:

      Usage: guild-deploy.sh [-n <mainnet|preprod|guild|preview>] [-p path] [-t <name>] [-b <branch>] [-u] [-s [p][b][l][f][d][c][o][w][x]]\nSet up dependencies for building/using common tools across cardano ecosystem.\nThe script will always update dynamic content from existing scripts retaining existing user variables\n\n-n    Connect to specified network instead of mainnet network (Default: connect to cardano mainnet network) eg: -n guild\n-p    Parent folder path underneath which the top-level folder will be created (Default: /opt/cardano)\n-t    Alternate name for top level folder - only alpha-numeric chars allowed (Default: cnode)\n-b    Use alternate branch of scripts to download - only recommended for testing/development (Default: master)\n-u    Skip update check for script itself\n-s    Selective Install, only deploy specific components as below:\n  p   Install common pre-requisite OS-level Dependencies for most tools on this repo (Default: skip)\nb   Install OS level dependencies for tools required while building cardano-node/cardano-db-sync components (Default: skip)\nl   Build and Install libsodium fork from IO repositories (Default: skip)\nf   Force overwrite entire content of scripts and config files (backups of existing ones will be created) (Default: skip)\nd   Download latest (released) binaries for bech32, cardano-address, cardano-node, cardano-cli, cardano-db-sync and cardano-submit-api binaries (Default: skip)\nc   Install/Upgrade CNCLI binary (Default: skip) # (1)!\no   Install/Upgrade Ogmios Server binary (Default: skip)\nw   Install/Upgrade Cardano Hardware CLI (Default: skip)\nx   Install/Upgrade Cardano Signer binary (Default: skip)\n
1. If you receive an error for glibc, it is likely due to a build mismatch between the pre-compiled binary and your OS, which is not uncommon. You may need to compile cncli manually on your OS as per instructions here - make sure to copy the output binary to the \"${HOME}/.local/bin\" folder.

This script uses opt-in election of what you'd like the script to do (as opposed to the previous version, which used to try and auto-detect versions). The defaults without any arguments will only update the static part of the script contents for you. A typical example install of most components, without overwriting the static part of existing files, for the preview network would be:

      ./guild-deploy.sh -b master -n preview -t cnode -s pdlcowx\n. \"${HOME}/.bashrc\"\n

If, instead of downloading binaries, you'd like to build the components yourself, you could use:

      ./guild-deploy.sh -b master -n preview -t cnode -s pblcowx\n. \"${HOME}/.bashrc\"\n

Lastly, if you want to update your scripts but not install any additional dependencies, you may simply run:

      ./guild-deploy.sh -b master -n preview -t cnode\n
      "},{"location":"basics/#folder-structure","title":"Folder structure","text":"

Running the script above will create the folder structure as per below, for your reference. You can replace the top-level folder /opt/cardano/cnode by editing the value of CNODE_HOME in the ~/.bashrc and $CNODE_HOME/files/env files:

      /opt/cardano/cnode            # Top-Level Folder\n\u251c\u2500\u2500 ...\n\u251c\u2500\u2500 files                     # Config, genesis and topology files\n\u2502   \u251c\u2500\u2500 ...\n\u2502   \u251c\u2500\u2500 byron-genesis.json    # Byron Genesis file referenced in config.json\n\u2502   \u251c\u2500\u2500 shelley-genesis.json  # Genesis file referenced in config.json\n\u2502   \u251c\u2500\u2500 alonzo-genesis.json    # Alonzo Genesis file referenced in config.json\n\u2502   \u251c\u2500\u2500 config.json           # Config file used by cardano-node\n\u2502   \u2514\u2500\u2500 topology.json         # Map of chain for cardano-node to boot from\n\u251c\u2500\u2500 db                        # DB Store for cardano-node\n\u251c\u2500\u2500 guild-db                  # DB Store for guild-specific tools and additions (eg: cncli, cardano-db-sync's schema)\n\u251c\u2500\u2500 logs                      # Logs for cardano-node\n\u251c\u2500\u2500 priv                      # Folder to store your keys (permission: 600)\n\u251c\u2500\u2500 scripts                   # Scripts to start and interact with cardano-node\n\u2514\u2500\u2500 sockets                   # Socket files created by cardano-node\n
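
As a rough sketch only (not an official procedure): a hypothetical relocation to /home/user/cnode could look like the below - adjust the target path to your own, and note that other references (e.g. any deployed systemd unit files) may also need updating:

mv /opt/cardano/cnode /home/user/cnode\nsed -i 's#/opt/cardano/cnode#/home/user/cnode#g' ~/.bashrc /home/user/cnode/files/env\n. ~/.bashrc\n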
      "},{"location":"build/","title":"Overview","text":"

      The documentation here uses instructions from IOHK repositories as a foundation, with additional info which we can contribute to where appropriate. Note that not everyone needs to build each component. You can refer to architecture to understand and qualify which of the components built by IO you want to run.

      "},{"location":"build/#components","title":"Components","text":"

      For most Pool Operators, simply building cardano-node should be enough. Use the below to decide whether you need other components:

      graph TB A([Interact with HD Walletslocally]) B([Explore blockchainlocally]) C([Easy pool-ops andfund management]) D([Create Custom Assets]) E([Monitor node using Terminal UI]) F([Sign/verify any datausing crypto keys]) N(Node) O(Ogmios) P(gRest/Koios) Q(DBSync) R(Wallet) S(CNTools) T(Tx Submit API) U(GraphQL) V(OfflineMetadataTools) X(gLiveView) Y(cardano-signer) Z[(PostgreSQL)] N --x C --x S N --x D --x S & V N --x E --x X N --x B B --x U --x Q B --x P --x Q P --x O P --x T F ---x Y N --x A --x R Q --x Z

      Important

We strongly prefer use of gRest over GraphQL components due to performance, security, simplicity, control and most importantly - consistency benefits. Please refer to the official documentation if you're interested in GraphQL or Cardano-Rest components instead.

      Note

The instructions are intentionally limited to stack/cabal to avoid wait times/availability of nix/docker files on a rapidly developing codebase - this also helps us prevent managing multiple versions of instructions.

      "},{"location":"build/#description-for-components-built-by-community","title":"Description for components built by community","text":""},{"location":"build/#cntools","title":"CNTools","text":"

      A swiss army knife for pool operators, primarily built by Ola, to simplify typical operations regarding their wallet keys and pool management. You can read more about it here

      "},{"location":"build/#gliveview","title":"gLiveView","text":"

      A local node monitoring tool, primarily built by Ola, to use in addition to remote monitoring tools like Prometheus/Grafana, Zabbix or IOG's RTView. This is especially useful when moving to a systemd deployment - if you haven't done so already - as it offers an intuitive UI to monitor the node status. You can read more about it here

      "},{"location":"build/#topology-updater","title":"Topology Updater","text":"

      A temporary node-to-node discovery solution, run by Markus, that was started initially to bridge the gap created while awaiting completion of P2P on cardano network, but has since become an important lifeline to the network health - to allow everyone to activate their relay nodes without having to postpone and wait for manual topology completion requests. You can read more about it here

      "},{"location":"build/#koiosgrest","title":"Koios/gRest","text":"

      A full-featured local query layer node to explore blockchain data (via dbsync) using standardised pre-built queries served via API as per standard from Koios - for which user can opt to participate in elastic query layer. You can read more about build steps here and reference API endpoints here

      "},{"location":"build/#ogmios","title":"Ogmios","text":"

      A lightweight bridge interface for cardano-node. It offers a WebSockets API that enables local clients to speak Ouroboros' mini-protocols via JSON/RPC. You can read more about it here

      "},{"location":"build/#cncli","title":"CNCLI","text":"

      A CLI tool written in Rust by Andrew Westberg for low-level communication with cardano-node. It is commonly used by SPOs to check their leader logs (integrates with CNTools as well as gLiveView) or to send their pool's health information to https://pooltool.io. You can read more about it here

      "},{"location":"build/#cardano-signer","title":"Cardano Signer","text":"

      A tool written by Martin to sign/verify data (hex, text or binary) using cryptographic keys to generate data as per CIP-8 or CIP-36 standards. You can read more about it here

      "},{"location":"contributors/","title":"Contributors","text":"

      Everyone is welcome to contribute to the guide, as well as the repository. Below is just a thank you to people who have been contributing consistently:

      Adam Chris Damjan Homer Markus OCG Ola Ahlman Pal Dorogi Papacarp PegasusPool Psychomb RdLrT RedOracle SmaugPool

To start contributing, simply head to the GitHub repository and raise an Issue/Pull Request

      "},{"location":"grest-meets/","title":"GRest Meeting summaries","text":"

      Thank you all for joining and contributing to the project

      Below you can find a short summary of every GRest meeting held, both for logging purposes and for those who were not able to attend.

      "},{"location":"grest-meets/#participants","title":"Participants:","text":"Participant 16Sep2021 02Sep2021 26Aug2021 19Aug2021 12Aug2021 29Jul2021 22Jul2021 15Jul2021 09Jul2021 02Jul2021 25Jun2021 Damjan Homer Markus Ola RdLrT Red Papacarp Paddy GimbaLabs 16Sep2021 02Sep2021 26Aug2021 19Aug2021 12Aug2021 29Jul2021 22Jul2021 15Jul2021 09Jul2021 02Jul2021

      After the initial stand-up updates from participants, we went through the entire Trello board, updating/deleting existing tickets and creating some new ones.

      25Jun2021"},{"location":"grest-meets/#scheduling-running-update-queries","title":"Scheduling running update queries","text":""},{"location":"grest-meets/#refactor-of-queries","title":"Refactor of queries","text":""},{"location":"grest-meets/#postgres-tuning","title":"postgres tuning","text":""},{"location":"grest-meets/#updates","title":"Updates","text":""},{"location":"grest-meets/#queries","title":"Queries","text":""},{"location":"grest-meets/#problems","title":"Problems","text":""},{"location":"grest-meets/#actions","title":"Actions","text":""},{"location":"grest-meets/#queries_1","title":"Queries","text":""},{"location":"grest-meets/#transaction-submission-feature","title":"Transaction submission feature","text":""},{"location":"grest-meets/#db-replication-presentation-by-redoracle","title":"DB replication presentation by Redoracle","text":""},{"location":"grest-meets/#process-for-upgrading-our-instances","title":"Process for upgrading our instances:","text":""},{"location":"grest-meets/#queries_2","title":"Queries:","text":""},{"location":"grest-meets/#stake-distribution","title":"Stake distribution","text":""},{"location":"grest-meets/#tx-history","title":"Tx History","text":""},{"location":"grest-meets/#problems_1","title":"PROBLEMS","text":""},{"location":"grest-meets/#actions_1","title":"ACTIONS","text":""},{"location":"grest-meets/#problems_2","title":"PROBLEMS","text":""},{"location":"grest-meets/#actions_2","title":"ACTIONS","text":""},{"location":"grest-meets/#problems_3","title":"PROBLEMS","text":""},{"location":"grest-meets/#actions_3","title":"ACTIONS","text":"
      1. Team

        • catch live stake distributions in a separate table (in our grest schema)
          • these queries can run on a schedule
          • response comes from the instance with the latest data
        • other approaches:
          • possibly distribute pools between instances (complex approach)
          • run full query once and only check for new/leaving delegators (probably impossible because of existing delegator UTXO movements)
        • implement monitoring of execution times for all the queries
        • come up with a timeline for launch (next call)
        • stress test before launch
        • start building queries listed on Trello board
      2. Individual

        • sync db-sync instances to commit 84226d33eed66be8e61d50b7e1dacebdc095cee9 on release/10.1.x
        • update setups to reflect recent directory restructuring and updated instructions
      "},{"location":"grest-meets/#introduction-for-new-joiner-paddy","title":"Introduction for new joiner - Paddy","text":""},{"location":"grest-meets/#problems_4","title":"Problems","text":""},{"location":"grest-meets/#action-items","title":"Action Items","text":""},{"location":"grest-meets/#deployment-scripts","title":"Deployment scripts","text":"

Ola added automatic deployment of services to the scripts last week. We added new tasks on the Trello ticket, including flags for multiple networks (guild, testnet, mainnet), a haproxy service dynamically creating hosts, and doc updates. Overall, the script works well, with some manual interaction still required at the moment.

      "},{"location":"grest-meets/#supported-networks","title":"Supported Networks","text":"

      Just for the record here, a 16GB (or even 8GB) instance is enough to support both testnet and guild networks.

      "},{"location":"grest-meets/#db-sync-versioning","title":"db-sync versioning","text":"

      We agreed to use the release/10.1.x branch which is not yet released but built to include Alonzo migrations to avoid rework later. This version does require Alonzo config and hash to be in the node's config.json. This has to be done manually and the files are available here. Once fully released, all members should rebuild the released version to ensure each instance is running the same code.

      "},{"location":"grest-meets/#dns-naming","title":"DNS naming","text":"

      For the DNS setup ticket, we started to think about the instance names for the 2 DNS instances (orange in the graph). Submissions for names will be made in the Telegram group, and will probably make a poll once we have the entries finalised.

      "},{"location":"grest-meets/#monitoring-system","title":"Monitoring System","text":"

      Priyank started setting up the monitoring on his instance which can then easily be switched to a separate monitoring instance. We agreed to use Prometheus / Grafana combo for data source / visualisation. We'll probably need to create some custom archiving of data to keep it long term as Prometheus stores only the last 30 days of data.

      "},{"location":"grest-meets/#next-meeting","title":"Next meeting","text":"

We would like to make Friday @ 07:00 UTC the standard time and keep meetings at weekly frequency. A poll will still be created for the next weeks, but if there are no objections / requests for switching the time around (which we have not had so far) we can go ahead with making Friday the standard, with polls no longer required and only reminders / Google invites sent every week.

      "},{"location":"grest-meets/#deployment-scripts_1","title":"Deployment scripts","text":"

      During the last week, work has been done on deployment scripts for all services (db-sync, gRest and haproxy) -> this is now in testing with updated instructions on trello. Everybody can put their name down on the ticket to signify when the setup is complete and note down any comments for bugs/improvements. This is the main priority at the moment as it would allow us to start transferring our setups to mainnet.

      "},{"location":"grest-meets/#switch-to-mainnet","title":"Switch to Mainnet","text":"

      Following on from that, we created a ticket for starting to set up mainnet instances -> we can use 32GB RAM to start and increase later. While making sure everything works against the guild network is priority, people are free to start on this as well as we anticipate we are almost ready for the switch.

      "},{"location":"grest-meets/#supported-networks_1","title":"Supported Networks","text":"

      This brings me to another discussion point which is on which networks are to be supported. After some discussion, it was agreed to keep beefy servers for mainnet, and have small independent instances for testnet maintained by those interested, while guild instance is pretty lightweight and useful to keep.

      "},{"location":"grest-meets/#monitoring-system_1","title":"Monitoring System","text":"

The ticket for creating a centralised monitoring system was discussed and updated. I would say it would be good to have at least a basic version of the system in place around the time we switch to mainnet. The system could eventually serve for: analysis of instance performances and subsequent tuning, endpoint usage, anticipation of system requirement increases, etc.

      I would say that this should be an important topic of the next meeting to come up with an approach on how we will structure this system so that we can start building it in time for mainnet switch.

      "},{"location":"grest-meets/#handling-ssl","title":"Handling SSL","text":"

      Enabling SSL was agreed to not be required by each instance, but is optional and documentation should be created for how to automate the process of renewing SSL certificates for those wishing to add it to their instance. The end user facing endpoints \"Instance Checker\" will of course be SSL-enabled.

      "},{"location":"grest-meets/#next-meeting_1","title":"Next meeting","text":"

      We somewhat agreed to another meeting next week again at the same time, but some participants aren't 100% for availability. Friday at 07:00 UTC might be a good standard time we hold on to, but I will make a poll like last time so that we can get more info before confirming the meeting.

      "},{"location":"grest-meets/#meeting-structure","title":"Meeting Structure","text":"

      As this was the first meeting, at the start we discussed about the meeting structure. In general, we agreed to something like listed below, but this can definitely change in the future:

1) 2-liner (60s) round the table stand-ups by everyone to sync up on what they were doing / are planning to do / mention struggles etc. This itself often sparks discussions. 2) going through the Trello board tasks with the intention of discussing and possibly assigning them to individuals / smaller groups (maybe 1-2-3 people choose to work together on a single task)

      "},{"location":"grest-meets/#stand-ups","title":"Stand-ups","text":"

      We then proceeded to give a status of where we are individually in terms of what's been done, a summary below:

      "},{"location":"grest-meets/#main-discussion-points","title":"Main discussion points","text":"
      1. Directory structure on the repo -> General agreement is to have anything related to db-sync/postgREST separated from the current cnode-helper-scripts directory. We can finalise the end locations of files a bit later, for now intent should be to simply add them all to /files/dbsync folder. prereqs.sh addendum can be done once artifacts are finalised (added a Trello ticket for tracking).
2. DNS/haproxy configurations: We have two options: a. controlled approach for endpoints - wherein there is a layer of haproxy that will load balance and ensure the tip being in sync for individual providers (individuals can provide haproxy OR gRest instances). b. completely decentralised - each client to maintain a haproxy endpoint, which fails over to another node if it's not up to a recent tip. I think that in general, it was agreed to use a hybrid approach. Details are captured in the diagram here. DNS endpoint can be reserved post initial testing of haproxy-agent against mainnet nodes.
      3. Internal monitoring system This would be important and useful and has not been mentioned before this meeting (as far as I know). Basically, a system for monitoring all of our instances together and also handling alerts. Not only for ensuring good quality of service, but also for logging and inspection of short- and long-term trends to better understand what's happening. A ticket is added to trello board
      "},{"location":"grest-meets/#next-meeting_2","title":"Next meeting","text":"

      All in all, I think we saw that there is need for these meetings as there are a lot of things to discuss and new ideas come up (like the monitoring system). We went for over an hour (~1h15min) and still didn't have enough time to go through the board, we basically only touched the DNS/haproxy part of the board. This tells me that we are in a stage where more frequent meetings are required, weekly instead of biweekly, as we are in the initial stage and it's important to build things right from the start rather than having to refactor later on. With that, the participants in general agreed to another meeting next week, but this will be confirmed in the TG chat and the times can be discussed then.

      "},{"location":"sidebar/","title":"Tree","text":""},{"location":"upgrade/","title":"Upgrade","text":"One-Time major upgrade for Koios Scripts from 20-Jan-2023 (expand for details)

The scripts on the guild-operators repository have gone through quite a few changes to accommodate the below:

Some of the above required us to add breaking changes to some scripts, but hopefully the above explains the premise for those changes. To ease this one-time upgrade process for existing deployments, we have tried to come up with the guide below; feel free to edit this file to improve the documents based on your experience. Again, apologies in advance to those who do not agree with the above changes (the old code would of course remain unimpacted at tag legacy-scripts, so if you'd like to stick to the old scripts, you can use -b legacy-scripts for your tools to switch back).

      "},{"location":"upgrade/#steps-for-ugrading","title":"Steps for Ugrading","text":"

      Warning

      Make sure you go through upgrade steps for your setup in a non-mainnet environment first!

      Remember

      Please add any environment-specific parameters (eg: custom top level folder, network flag, etc) to the execution command below, similar to prereqs.sh (check new syntax using guild-deploy.sh -h)

      mkdir \"$HOME/tmp\";cd \"$HOME/tmp\"\ncurl -sS -o guild-deploy.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/guild-deploy.sh\nchmod 700 guild-deploy.sh\n./guild-deploy.sh -s f -b master\n
      source \"${HOME}\"/.bashrc\necho \"${PATH}\"\n

You can move the binaries by using the mv command (for example, if you don't have any other files in these folders, you can use the command below):

      Note

Ideally, you should shut down services (eg: cnode, cnode-dbsync, etc) prior to running the below to ensure they run from the new location (you can also re-deploy them if you haven't done so in a while, eg: ./cnode.sh -d). At the end of the guide, you can start them back up.

      mv -t \"${HOME}\"/.local/bin/ \"${HOME}\"/.cabal/bin/* \"${HOME}\"/.cargo/bin/* \"${HOME}\"/bin/*\n
      whereis bech32 cardano-address cardano-cli cardano-db-sync cardano-hw-cli cardano-node cardano-submit-api cncli ogmios\n

The above might result in some lines having more than one entry (eg: you might have cardano-cli in \"${HOME}\"/.cabal/bin and \"${HOME}\"/.local/bin) - for which you'd want to delete the reference(s) not in \"${HOME}\"/.local/bin, while for other cases you might have no values (eg: you may not use cardano-db-sync, cncli, ogmios and/or cardano-hw-cli). You need not take any action for the binaries you do not use.
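
For example, if whereis shows a duplicate cardano-cli in the old cabal location, you could remove just that stale copy (example only - check your own whereis output first):

rm -f \"${HOME}\"/.cabal/bin/cardano-cli\nwhereis cardano-cli    # should now only list the copy under \"${HOME}\"/.local/bin\n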

      "},{"location":"upgrade/#supportimprovements","title":"Support/Improvements","text":"

Hope the guide above helps you with the migration, but again - we could've missed some edge cases. If so, please report via chat in the Koios Discussions channel only. Please DO NOT make edits to the script content based on forum/alternate guides/channels; while done with the best intentions, there have been solutions put online that modify files unnecessarily instead of correcting configs and disabling updates, and such actions will only cause trouble for future updates.

      "},{"location":"Appendix/RecoverByronWallet/","title":"Unofficial Instructions for recovering your Byron Era funds on the new Incentivized Shelley Testnet","text":""},{"location":"Appendix/RecoverByronWallet/#1-grab-and-install-haskell","title":"1. Grab and install Haskell","text":"
      curl -sSL https://get.haskellstack.org/ | sh\n
      "},{"location":"Appendix/RecoverByronWallet/#2-get-the-wallet","title":"2. Get the wallet","text":"

Note: as of today you must build from source, as there are changes you need that have only just been merged into master.

      git clone https://github.com/input-output-hk/cardano-wallet.git\n

      "},{"location":"Appendix/RecoverByronWallet/#3-go-into-the-wallet-directory","title":"3. Go into the wallet directory","text":"
      cd cardano-wallet\n
      "},{"location":"Appendix/RecoverByronWallet/#4-build-the-wallet","title":"4. Build the wallet","text":"

      stack build --test --no-run-tests\n
If it fails, there are a few reasons we have found: - The cardano build instructions reference a few things that may be missing. Check those. - Or maybe one of the following would help:

      "},{"location":"Appendix/RecoverByronWallet/#libssl","title":"Libssl:","text":"
      sudo apt install libssl-dev\n
      "},{"location":"Appendix/RecoverByronWallet/#sqlite","title":"Sqlite :","text":"
      sudo apt-get install sqlite3 libsqlite3-dev \n
      "},{"location":"Appendix/RecoverByronWallet/#gmp","title":"gmp:","text":"
      sudo apt-get install libgmp3-dev \n
      "},{"location":"Appendix/RecoverByronWallet/#systemd-dev","title":"systemd dev:","text":"
      sudo apt install libsystemd-dev\n

Get coffee... it takes a while.

      "},{"location":"Appendix/RecoverByronWallet/#5-when-its-done-install-executables-to-your-path","title":"5. When its done, install executables to your path","text":"
      stack install\n
      "},{"location":"Appendix/RecoverByronWallet/#6-test-to-make-sure-cardano-wallet-jormungandr-works-fine","title":"6. Test to make sure cardano-wallet-jormungandr works fine.","text":"

Generate the new mnemonics you will need below. Note that this generates 15 words, as opposed to your Byron-era mnemonics which were only 12 words.

      cardano-wallet-jormungandr mnemonic generate\n
      "},{"location":"Appendix/RecoverByronWallet/#7-launch-the-wallet-as-a-service","title":"7. Launch the wallet as a service.","text":"

You can either open another terminal window or use screen or similar. Wherever you run this next command, that terminal won't be usable for anything else until you stop the wallet.

Change --node-port 3001 to wherever you have your jormungandr REST interface running. For me it was 5001.

Change --port 3002 to wherever you want to access the wallet interface. If you have other things running, avoid those ports. For most, 3002 should be free.

To future-proof these instructions: the genesis hash should be whichever genesis you are on.

      cardano-wallet-jormungandr serve --node-port 3001 --port 3002 --genesis-block-hash e03547a7effaf05021b40dd762d5c4cf944b991144f1ad507ef792ae54603197\n
      "},{"location":"Appendix/RecoverByronWallet/#8-restore-your-byron-wallet","title":"8. Restore your byron wallet:","text":"

In another terminal window:

Replace the foo entries with all the mnemonics from the Byron wallet you are restoring.

Also, if you put your wallet on a different port than 3002, adjust that too.

      curl -X POST -H \"Content-Type: application/json\" -d '{ \"name\": \"legacy_wallet\", \"mnemonic_sentence\": [\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\"], \"passphrase\": \"areallylongpassword\"}' http://localhost:3002/v2/byron-wallets\n
That is going to print some information about the wallet it creates - you should see the value of your wallet (hopefully it is not zero), and you will need the wallet ID for the next step.

      "},{"location":"Appendix/RecoverByronWallet/#9-create-your-shelley-wallet","title":"9. Create your shelley wallet:","text":"

Remember all those mnemonics you generated above - put them here instead of the foo entries.

      curl -X POST -H \"Content-Type: application/json\" -d '{ \"name\": \"pool_wallet\", \"mnemonic_sentence\": [\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\",\"foo\"], \"passphrase\": \"areallylongpasswordagain\"}' http://localhost:3002/v2/wallets\n
The important thing to note from this command's output is the wallet id.

      "},{"location":"Appendix/RecoverByronWallet/#10-migrate-your-funds","title":"10. Migrate your funds","text":"

Now you are ready to migrate your wallet. Replace <old wallet id> and <new wallet id> with the values you got above.

      curl -X POST -H \"Content-Type: application/json\" -d '{\"passphrase\": \"areallylongpassword\"}' http://localhost:3002/v2/byron-wallets/<old wallet id>/migrations/<new wallet id>\n
      "},{"location":"Appendix/RecoverByronWallet/#11-congratulations-your-funds-are-now-in-your-new-wallet","title":"11. Congratulations. your funds are now in your new wallet.","text":"

From here we recommend you send the funds to a new address entirely owned and created by jcli, or whatever method you have been using for the testnet process.

This technically may not be required, but a lot of us did it and we know it works for setting up pools.

Send a small amount first, just to make sure you are in control of the transaction and don't send your funds into the void.

If you want to send to another address, use the command below, replacing the destination address, the amount and your <new wallet id>:

      curl -X POST -H \"Content-Type: application/json\" -d '{\"payments\": [ { \"address\": \"<address to send to>\"\", \"amount\": { \"quantity\": 83333330000000, \"unit\": \"lovelace\" } } ], \"passphrase\": \"areallylongpasswordagain\"}' http://localhost:3002/v2/wallets/<new wallet id>/transactions\n

      "},{"location":"Appendix/monitoring/","title":"Monitoring","text":"

      Ensure the Pre-Requisites are in place before you proceed.

This is an easy-to-use script to automate the setup of monitoring tools. The script automates the following tasks: - Installs Prometheus, Node Exporter and Grafana servers for your respective Linux architecture. - Configures Prometheus to connect to the cardano-node and node exporter jobs. - Provisions the installed Prometheus server to be automatically available as a data source in Grafana. - Provisions two of the common Grafana dashboards used to monitor cardano-node (by SkyLight and IOHK) to be readily consumed from Grafana. - Deploys prometheus, node_exporter and grafana-server as systemd services on Linux. - Starts and enables those services.

Note that securing the Prometheus/Grafana servers via TLS encryption and other security best practices is out of scope for this document; it is mainly aimed at helping you get started with monitoring without much fuss.

!> Ensure that you've opened the firewall port for the grafana server (the default used in this script is 5000)

      "},{"location":"Appendix/monitoring/#download-setup_monsh","title":"Download setup_mon.sh","text":"

If you have run guild-deploy.sh, you can skip this step. To download the monitoring script, you can execute the commands below:

      cd $CNODE_HOME/scripts\nwget https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/setup_mon.sh\nchmod 750 setup_mon.sh\n

      "},{"location":"Appendix/monitoring/#customise-any-environment-variables","title":"Customise any Environment Variables","text":"

The default selection may not suit everyone. You can customise the environment variable settings further by opening the script in an editor (eg: vi setup_mon.sh) and updating the variables below to your liking:

#!/usr/bin/env bash\n# shellcheck disable=SC2209,SC2164\n\n######################################################################\n#### Environment Variables\n######################################################################\nCNODE_IP=127.0.0.1\nCNODE_PORT=12798\nGRAFANA_HOST=0.0.0.0\nGRAFANA_PORT=5000\nPROJ_PATH=/opt/cardano/monitoring\nPROM_HOST=127.0.0.1\nPROM_PORT=9090\nNEXP_PORT=$(( PROM_PORT + 1 ))\n

To set up monitoring, execute setup_mon.sh with the full path to the destination folder you want to set up monitoring in. If you're following the guild folder structure, you do not need to specify -d. Read the usage comments below before you run the actual script.

Note that to deploy services as systemd, the script expects sudo access to be available to the user running the script.

cd $CNODE_HOME/scripts\n# To check Usage parameters:\n# ./setup_mon.sh -h\n#Usage: setup_mon.sh [-d directory] [-h hostname] [-p port]\n#Setup monitoring using Prometheus and Grafana for Cardano Node\n#-d directory      Directory where you'd like to deploy the packages for prometheus , node exporter and grafana\n#-i IP/hostname    IPv4 address or a FQDN/DNS name where your cardano-node (relay) is running (check for hasPrometheus in config.json; eg: 127.0.0.1 if same machine as cardano-node)\n#-p port           Port at which your cardano-node is exporting stats (check for hasPrometheus in config.json; eg: 12798)\n./setup_mon.sh\n# \n# Downloading prometheus v2.18.1...\n# Downloading grafana v7.0.0...\n# Downloading exporter v0.18.1...\n# Downloading grafana dashboard(s)...\n#   - SKYLight Monitoring Dashboard\n#   - IOHK Monitoring Dashboard\n# \n# NOTE: Could not create directory as rdlrt, attempting sudo ..\n# NOTE: No worries, sudo worked !! Moving on ..\n# Configuring components\n# Registering Prometheus as datasource in Grafana..\n# Creating service files as root..\n# \n# =====================================================\n# Installation is completed\n# =====================================================\n# \n# - Prometheus (default): http://127.0.0.1:9090/metrics\n#     Node metrics:       http://127.0.0.1:12798\n#     Node exp metrics:   http://127.0.0.1:9091\n# - Grafana (default):    http://0.0.0.0:5000\n# \n# \n# You need to do the following to configure grafana:\n# 0. The services should already be started, verify if you can login to grafana, and prometheus. If using 127.0.0.1 as IP, you can check via curl\n# 1. Login to grafana as admin/admin (http://0.0.0.0:5000)\n# 2. Add \"prometheus\" (all lowercase) datasource (http://127.0.0.1:9090)\n# 3. Create a new dashboard by importing dashboards (left plus sign).\n#   - Sometimes, the individual panel's \"prometheus\" datasource needs to be refreshed.\n# \n# Enjoy...\n# \n# Cleaning up...\n
      "},{"location":"Appendix/monitoring/#view-dashboards","title":"View Dashboards","text":"

You should now be able to log in to the Grafana dashboard using the public IP of your server, at port 5000. The initial credentials are admin/admin, and you will be asked to update your password upon first login. Once logged in, you should be able to go to Manage > Dashboards and select the dashboard you'd like to view. Note that if you've just started the server, the graphs may appear empty, as the initial interval for dashboards is 12 hours. You can change it to 5 minutes using the controls at the top right of the page.

Thanks to Pal Dorogi for the original setup instructions that these were adapted from.

      "},{"location":"Appendix/postgres/","title":"Sample Postgres Setup","text":"

These deployment instructions are used for reference while building the cardano-db-sync tool, with the scope being ease of setup and some tuning baselines for those who are new to Postgres DB. It is recommended to customise these as per your needs for production builds.

      Important

You'd find it pretty useful to set up ZFS on your system prior to setting up Postgres, to help with your IOPS throughput requirements. You can find sample install instructions here. You can set up your entire root mount to be on ZFS, or you can opt to mount a file as ZFS on \"${CNODE_HOME}\".

      "},{"location":"Appendix/postgres/#install-postgresql-server","title":"Install PostgreSQL Server","text":"

Execute the commands below to set up the Postgres server:

      # Determine OS platform\nOS_ID=$( (grep -i ^ID_LIKE= /etc/os-release || grep -i ^ID= /etc/os-release) | cut -d= -f 2)\nDISTRO=$(grep -i ^NAME= /etc/os-release | cut -d= -f 2)\n\nif [ -z \"${OS_ID##*debian*}\" ]; then\n#Debian/Ubuntu\nwget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -\n  RELEASE=$(lsb_release -cs)\necho \"deb [arch=amd64] http://apt.postgresql.org/pub/repos/apt/ ${RELEASE}\"-pgdg main | sudo tee  /etc/apt/sources.list.d/pgdg.list\n  sudo apt-get update\n  sudo apt-get -y install postgresql-15 postgresql-server-dev-15 postgresql-contrib libghc-hdbc-postgresql-dev\n  sudo systemctl restart postgresql\n  sudo systemctl enable postgresql\nelse\necho \"We have no automated procedures for this ${DISTRO} system\"\nfi\n
      "},{"location":"Appendix/postgres/#create-user-in-postgres","title":"Create User in Postgres","text":"

Log in to the Postgres instance as superuser:

      echo $(whoami)\n# <user>\nsudo su postgres\npsql\n

Note the <user> returned as the output of the echo $(whoami) command; replace all instances of <user> in the documentation below. Execute the below at the psql prompt, replacing <user> and PasswordYouWant with your OS user (the output of the echo $(whoami) command executed above) and a password you'd like to authenticate to Postgres with:

      CREATE ROLE <user> SUPERUSER LOGIN;\nALTER USER <user> PASSWORD 'PasswordYouWant';\n\\q\n
Type exit at the shell to return to your user from postgres.

      "},{"location":"Appendix/postgres/#verify-login-to-postgres-instance","title":"Verify Login to postgres instance","text":"
      export PGPASSFILE=$CNODE_HOME/priv/.pgpass\necho \"/var/run/postgresql:5432:cexplorer:*:*\" > $PGPASSFILE\nchmod 0600 $PGPASSFILE\npsql postgres\n# psql (15.0)\n# Type \"help\" for help.\n# \n# postgres=#\n
      "},{"location":"Appendix/postgres/#tuning-your-instance","title":"Tuning your instance","text":"

Before you start populating your DB instance using dbsync data, now might be a good time to put some thought into the baseline configuration of your postgres instance by editing /etc/postgresql/15/main/postgresql.conf. You will typically find plenty of common, standard-practice parameters in tuning guides. For our consideration, it is nice to start with some baselines - for which we will use inputs from the example here, which will need to be customised further to your environment and resources.

In a typical Koios [gRest] setup, for minimum viable specs (i.e. 64GB RAM, > 8 CPUs, >16K IOPS for ioping -q -S512M -L -c 10 -s8k . output when the postgres data directory is on ZFS configured with a max ARC of 4GB), we find the below configuration to be the best common setup:

Parameter Value Comment data_directory '/opt/cardano/cnode/guild-db/pgdb/15' Move postgres data directory to ZFS mount at /opt/cardano/cnode, ensure it's writable by postgres user effective_cache_size 8GB Be conservative as Node and DBSync by themselves will need ~32-40GB of RAM if ledger-state is enabled effective_io_concurrency 4 Can go higher if you have substantially higher IOPs/IO throughputs lc_time 'en_US.UTF-8' Just to use standard server-side time formatting between instances, can adapt to your preferences log_timezone 'UTC' For consistency, to avoid timezone confusions maintenance_work_mem 512MB Helps with vacuum/index/foreign key maintenance (with 4 workers, it's set to max 2GB) max_connections 200 Allow maximum of 200 connections, the koios connections are still controlled via postgrest db-pool max_parallel_maintenance_workers 4 Max workers postgres will use for maintenance max_parallel_workers 4 Max workers postgres will use across the system max_parallel_workers_per_gather 2 Parallel threads per query, do not increase to higher values as it will multiply memory usage max_wal_size 4GB Used for WAL automatic checkpoints (disabled later) max_worker_processes 4 Maximum number of background processes system can support min_wal_size 1GB Used for WAL automatic checkpoints (disabled later) random_page_cost 1.1 Use higher value if IOPs has trouble catching up (you can use 4 instead of 1.1) shared_buffers 4GB Conservative limit to allow for node/dbsync/zfs memory usage timezone 'UTC' For consistency, to avoid timezone confusions wal_buffers 16MB WAL consumption in shared buffer (disabled later) work_mem 16MB Base memory size before writing to temporary disk files
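
As an illustration, a few of the rows above translate into plain key = value entries in /etc/postgresql/15/main/postgresql.conf, for example (excerpt only; adjust to your own resources):

data_directory = '/opt/cardano/cnode/guild-db/pgdb/15'\neffective_cache_size = 8GB\nshared_buffers = 4GB\nmaintenance_work_mem = 512MB\nmax_connections = 200\nrandom_page_cost = 1.1\ntimezone = 'UTC'\nlog_timezone = 'UTC'\n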

In addition to the above, due to the nature of dbsync usage (syncing from the node and, on restart, traversing back to the last saved ledger-state snapshot), we can lean on the blockchain itself for data retention - as we're not affected by the loss of volatile information upon a restart of the instance. Thus, we can relax some of the data retention and corruption-protection related settings, as they cost IOPS/CPU load that the instance does not need to spend. We'd recommend setting the 3 parameters below in your /etc/postgresql/15/main/postgresql.conf:

      Parameter Value wal_level minimal max_wal_senders 0 synchronous_commit off
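
In postgresql.conf, those three entries would simply read:

wal_level = minimal\nmax_wal_senders = 0\nsynchronous_commit = off\n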

Once your changes are done, ensure you restart the postgres service using sudo systemctl restart postgresql.

      "},{"location":"Build/dbsync/","title":"DBSync","text":"

      Important

      An average pool operator may not require cardano-db-sync at all. Please verify if it is required for your use as mentioned here.

      "},{"location":"Build/dbsync/#build-instructions","title":"Build Instructions","text":""},{"location":"Build/dbsync/#clone-the-repository","title":"Clone the repository","text":"

      Execute the below to clone the cardano-db-sync repository to $HOME/git folder on your system:

      cd ~/git\ngit clone https://github.com/input-output-hk/cardano-db-sync\ncd cardano-db-sync\n
      "},{"location":"Build/dbsync/#build-cardano-db-sync","title":"Build Cardano DB Sync","text":"

      You can use the instructions below to build the latest release of cardano-db-sync.

      git fetch --tags --all\ngit pull\n# Include the cardano-crypto-praos and libsodium components for db-sync\n# On CentOS 7 (GCC 4.8.5) we should also do\n# echo -e \"package cryptonite\\n  flags: -use_target_attributes\" >> cabal.project.local\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-db-sync/releases/latest | jq -r .tag_name)\n# Use `-l` argument if you'd like to use system libsodium instead of IOG fork of libsodium while compiling\n$CNODE_HOME/scripts/cabal-build-all.sh\n
      The above would copy the cardano-db-sync binary into ~/.local/bin folder.

      "},{"location":"Build/dbsync/#prepare-db-for-sync","title":"Prepare DB for sync","text":"

Now that the binaries are available, let's create our database (when going through breaking changes, you may need to use --recreatedb instead of the --createdb used for the first time). Again, we expect that the PGPASSFILE environment variable is already set (refer to the top of this guide for sample instructions):

      cd ~/git/cardano-db-sync\n# scripts/postgresql-setup.sh --dropdb #if exists already, will fail if it doesnt - thats OK\nscripts/postgresql-setup.sh --createdb\n# Password:\n# Password:\n# All good!\n

      Verify you can see \"All good!\" as above!

      "},{"location":"Build/dbsync/#create-symlink-to-schema-folder","title":"Create Symlink to schema folder","text":"

The DBSync instance requires the schema files from the git repository to be present and available to it. You can either clone the ~/git/cardano-db-sync/schema folder OR create a symlink to the folder and make it available to the startup command we will be using. We will use the latter in the sample below:

      ln -s ~/git/cardano-db-sync/schema $CNODE_HOME/guild-db/schema\n
      "},{"location":"Build/dbsync/#restore-using-snapshot","title":"Restore using Snapshot","text":"

If you're running a mainnet/preview/preprod instance of dbsync, you might want to consider the use of dbsync snapshots as documented here. Snapshot files as of a recent epoch are available via links in the release notes.

      At high-level, this would involve steps as below (read and update paths as per your environment):

# Replace the actual link below with the latest one from release notes\nwget -O /tmp/dbsyncsnap.tgz https://update-cardano-mainnet.iohk.io/cardano-db-sync/13/db-sync-snapshot-schema-13-block-7622755-x86_64.tgz\nrm -rf ${CNODE_HOME}/guild-db/ledger-state ; mkdir -p ${CNODE_HOME}/guild-db/ledger-state\ncd ~/git/cardano-db-sync\nscripts/postgresql-setup.sh --restore-snapshot /tmp/dbsyncsnap.tgz ${CNODE_HOME}/guild-db/ledger-state\n# The restore may take a while, please be patient and do not interrupt the restore process. Once restore is successful, you may delete the downloaded snapshot as below:\n#   rm -f /tmp/dbsyncsnap.tgz\n
      "},{"location":"Build/dbsync/#test-running-dbsync-manually-at-terminal","title":"Test running dbsync manually at terminal","text":"

Before deploying dbsync as a service, you'd want to ensure that you can run it interactively once. To do so, try the commands below:

      cd $CNODE_HOME/scripts\nexport PGPASSFILE=$CNODE_HOME/priv/.pgpass\n./dbsync.sh\n

You can monitor logs if needed via a parallel session using tail -10f $CNODE_HOME/logs/dbsync.json. If there are no errors, press Ctrl-C to stop the dbsync.sh execution and deploy it as a systemd service. To do so, use the commands below (the creation of the file is done using sudo permissions, but you can always deploy it manually):

      cd $CNODE_HOME/scripts\n./dbsync.sh -d\n# Deploying cnode-dbsync.service as systemd service..\n# cnode-dbsync.service deployed successfully!!\n

Now, to start the dbsync instance, you can run sudo systemctl start cnode-dbsync

      Note

Note that while dbsync syncs, it might defer the creation of indexes/constraints to speed up the initial catch-up. Once relatively close to the tip, it will initiate the creation of indexes - which can take a while in the background. Thus, you might notice that query timings right after reaching the tip are not as good.
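
If you're unsure whether those background index builds are still running, a rough check against pg_stat_activity (illustrative; assumes PGPASSFILE is exported as earlier in this guide) could look like:

psql cexplorer -c \"select pid, now()-query_start as runtime, left(query,60) as query from pg_stat_activity where state='active' and query ilike 'create%index%';\"\n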

      "},{"location":"Build/dbsync/#update-dbsync","title":"Update DBSync","text":"

Updating dbsync can involve different tasks depending on the versions involved. We attempt to briefly explain the tasks involved:

      "},{"location":"Build/dbsync/#validation","title":"Validation","text":"

      To validate, connect to your postgres instance and execute commands as per below:

      export PGPASSFILE=$CNODE_HOME/priv/.pgpass\npsql cexplorer\n

You should now be at the psql prompt; you can check the tables and verify they're populated:

      \\dt\nselect * from meta;\n

      A sample output of the above two commands may look like below (the number of tables and names may vary between versions):

      cexplorer=# \\dt\nList of relations\n Schema |           Name            | Type  | Owner\n--------+---------------------------+-------+-------\n public | ada_pots                  | table | centos\n public | admin_user                | table | centos\n public | block                     | table | centos\n public | delegation                | table | centos\n public | delisted_pool             | table | centos\n public | epoch                     | table | centos\n public | epoch_param               | table | centos\n public | epoch_stake               | table | centos\n public | ma_tx_mint                | table | centos\n public | ma_tx_out                 | table | centos\n public | meta                      | table | centos\n public | orphaned_reward           | table | centos\n public | param_proposal            | table | centos\n public | pool_hash                 | table | centos\n public | pool_meta_data            | table | centos\n public | pool_metadata             | table | centos\n public | pool_metadata_fetch_error | table | centos\n public | pool_metadata_ref         | table | centos\n public | pool_owner                | table | centos\n public | pool_relay                | table | centos\n public | pool_retire               | table | centos\n public | pool_update               | table | centos\n public | pot_transfer              | table | centos\n public | reserve                   | table | centos\n public | reserved_ticker           | table | centos\n public | reward                    | table | centos\n public | schema_version            | table | centos\n public | slot_leader               | table | centos\n public | stake_address             | table | centos\n public | stake_deregistration      | table | centos\n public | stake_registration        | table | centos\n public | treasury                  | table | centos\n public | tx                        | table | centos\n public | tx_in                     | table | centos\n public | tx_metadata               | table | centos\n public | tx_out                    | table | centos\n public | withdrawal                | table | centos\n(37 rows)\n\n\n\nselect * from meta;\n id |     start_time      | network_name\n----+---------------------+--------------\n  1 | 2017-09-23 21:44:51 | mainnet\n(1 row)\n
      "},{"location":"Build/graphql/","title":"Graphql","text":"

!> We have stopped maintaining documentation for Cardano-GraphQL and prefer the use of PostgREST instead. This component does not follow the process/technology/language (it requires npm, yarn) used by other components (cabal/stack), and the value provided by cardano-graphql over the (haskell-based) hasura instance has been negligible. Also, an average pool operator may not require cardano-graphql at all; please verify if it is required for your use as mentioned here. The instructions below are out of date.

      Ensure the Pre-Requisites are in place before you proceed.

      "},{"location":"Build/graphql/#build-hasura-graphql-engine","title":"Build Hasura graphql-engine","text":"

In the spirit of the documentation here, instructions to build the graphql-engine binary :)

      cd ~/git\ngit clone https://github.com/hasura/graphql-engine\ncd graphql-engine/server\n$CNODE_HOME/scripts/cabal-build-all.sh\n
      This should make graphql-engine available at ~/.local/bin.

      "},{"location":"Build/graphql/#build-cardano-graphql","title":"Build cardano-graphql","text":"

      The build will fail if you are running a version of node.js earlier than 10.0.0 (which could happen if you have a conflicting version in your $PATH). You can verify your node version by executing the below:

      #check your version of node.js\nnode -v\n#if response is 10.0.0 or higher build can proceed. \n

      The commands below will help you compile the cardano-graphql node:

      cd ~/git\ngit clone https://github.com/input-output-hk/cardano-graphql\ncd cardano-graphql\ngit checkout v1.1.1\nyarn\n#yarn install v1.22.4\n# [1/4] Resolving packages...\n# [2/4] Fetching packages...\n# info fsevents@2.1.2: The platform \"linux\" is incompatible with this module.\n# info \"fsevents@2.1.2\" is an optional dependency and failed compatibility check. Excluding it from installation.\n# info fsevents@1.2.12: The platform \"linux\" is incompatible with this module.\n# info \"fsevents@1.2.12\" is an optional dependency and failed compatibility check. Excluding it from installation.\n# [3/4] Linking dependencies...\n# warning \" > graphql-type-datetime@0.2.4\" has incorrect peer dependency \"graphql@^0.13.2\".\n# warning \" > @typescript-eslint/eslint-plugin@1.13.0\" has incorrect peer dependency \"eslint@^5.0.0\".\n# warning \" > @typescript-eslint/parser@1.13.0\" has incorrect peer dependency \"eslint@^5.0.0\".\n# [4/4] Building fresh packages...\n# Done in 20.70s.\nyarn build\n# yarn run v1.22.4\n# $ yarn codegen:internal && yarn codegen:external && tsc -p . && shx cp src/schema.graphql dist/\n# $ graphql-codegen\n#   \u2714 Parse configuration\n#   \u2714 Generate outputs\n# $ graphql-codegen --config ./codegen.external.yml\n#   \u2714 Parse configuration\n#   \u2714 Generate outputs\n# Done in 38.11s.\ncd dist\nrsync -arvh ../node_modules ./\n

      "},{"location":"Build/graphql/#set-up-environment-for-cardano-graphql","title":"Set up environment for cardano-graphql","text":"

      cardano-graphql requires cardano-node, cardano-db-sync-extended, postgresql and graphql-engine to be set up and running. The below will help you map the components:

      export PGPASSFILE=$CNODE_HOME/priv/.pgpass\nIFS=':' read -r -a PGPASS <<< $(cat $PGPASSFILE)\nexport HASURA_GRAPHQL_ENABLE_TELEMETRY=false  # Optional.  To send usage data to Hasura, set to true.\nexport HASURA_GRAPHQL_DATABASE_URL=postgres://${PGPASS[3]}:${PGPASS[4]}@${PGPASS[0]}:${PGPASS[1]}/${PGPASS[2]}\nexport HASURA_GRAPHQL_ENABLE_CONSOLE=true\nexport HASURA_GRAPHQL_ENABLED_LOG_TYPES=\"startup, http-log, webhook-log, websocket-log, query-log\"\nexport HASURA_GRAPHQL_SERVER_PORT=4080\nexport HASURA_GRAPHQL_SERVER_HOST=0.0.0.0\nexport CACHE_ENABLED=true\nexport HASURA_URI=http://127.0.0.1:4080\ncd ~/git/cardano-graphql/dist\ngraphql-engine serve &\nnode index.js\n

      "},{"location":"Build/grest-changelog/","title":"Koios gRest Changelog","text":""},{"location":"Build/grest-changelog/#110rc-for-all-networks","title":"[1.1.0rc] - For all networks.","text":"

This will be the first major [breaking] release for Koios consumers in a while, and will be rolled out under a new base prefix (/api/v1). The major work in this release was to start making use of newer flags in dbsync which help the performance of queries under new endpoints. You'll also see quite a few new endpoint additions below, which help with slightly lighter versions of queries. To keep migration paths easier, we will ensure both v0 and v1 versions of the release are up for a month post release, before retiring v0.

      "},{"location":"Build/grest-changelog/#new-endpoints-added","title":"New endpoints added:","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes","title":"Data Input/Output Changes:","text":""},{"location":"Build/grest-changelog/#deprecations","title":"Deprecations:","text":""},{"location":"Build/grest-changelog/#chores","title":"Chores:","text":""},{"location":"Build/grest-changelog/#1010-for-all-networks","title":"[1.0.10] - For all networks.","text":"

The release is effectively the same as 1.0.10rc, except for one minor modification below.

      "},{"location":"Build/grest-changelog/#chores_1","title":"Chores:","text":""},{"location":"Build/grest-changelog/#1010rc-for-non-mainnet-networks","title":"[1.0.10rc] - For non-mainnet networks","text":"

This release primarily focuses on the ability to better support DeFi projects, along with some value addition for existing clients, by bringing in 10 new endpoints (paired with 2 deprecations), a few additional optional input parameters and some additional output columns to existing endpoints. The only breaking change/fix is the output returned for tx_info.

Also, dbsync 13.1.x.x has been released and is recommended for use with this release.

      "},{"location":"Build/grest-changelog/#new-endpoints-added_1","title":"New endpoints added","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes_1","title":"Data Input/Output Changes","text":""},{"location":"Build/grest-changelog/#deprecations_1","title":"Deprecations:","text":""},{"location":"Build/grest-changelog/#chores_2","title":"Chores:","text":""},{"location":"Build/grest-changelog/#109-for-all-networks","title":"[1.0.9] - For all networks","text":"

This release is effectively the same as 1.0.9rc below (please check out the notes accordingly), just with a minor bug fix on setup-grest.sh itself.

      "},{"location":"Build/grest-changelog/#109rc-for-non-mainnet-networks","title":"[1.0.9rc] - For non-mainnet networks","text":"

This release candidate is non-breaking for existing methods and inputs, but breaking for the output objects of endpoints. The aim with the release candidate version is to allow folks a couple of weeks to test and adapt their libraries before applying it to mainnet.

      "},{"location":"Build/grest-changelog/#new-endpoints-added_2","title":"New endpoints added","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes_2","title":"Data Input/Output changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#108-for-all-networks","title":"[1.0.8] - For all networks","text":"

This release contains minor bug fixes that were discovered in koios-1.0.7. No major changes to output for this one.

      "},{"location":"Build/grest-changelog/#changes-for-api","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#new-endpoints-added_3","title":"New endpoints added","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes_3","title":"Data Input/Output changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_1","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#107-for-all-networks","title":"[1.0.7] - For all networks","text":"

This release continues updates from koios-1.0.6 to further utilise stake-snapshot cache tables, which are useful for SPOs as well as reducing downtime post epoch transition. One largely requested feature, accepting bulk inputs for many block/address/account endpoints, is now complete. Additionally, koios instance providers are now recommended to use cardano-node 1.35.3 with dbsync 13.0.5.

      "},{"location":"Build/grest-changelog/#changes-for-api_1","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#new-endpoints-added_4","title":"New endpoints added","text":""},{"location":"Build/grest-changelog/#data-inputoutput-changes_4","title":"Data Input/Output changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_2","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#106106m-interim-release-for-all-networks-to-upgrade-to-dbsync-v13","title":"[1.0.6/1.0.6m] - Interim release for all networks to upgrade to dbsync v13","text":"

The backlog of items not being added to mainnet has been increasing due to delays with the Vasil HFC event on mainnet. As such, we had to come up with a split update approach. The mainnet nodes are still not qualified to be Vasil-ready (in our opinion) for 1.35.x, but dbsync 13 can be used against node 1.34.1 fine. In order to cater for this split, we have added an intermediate koios-1.0.6m tag that brings dbsync updates while maintaining node 1.34.1.

      "},{"location":"Build/grest-changelog/#changes-for-api_2","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes","title":"Data Output Changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_3","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#105-alpha-networks-only","title":"[1.0.5] - alpha networks only","text":"

Since there have been a few deviations wrt Vasil for testnet and mainnet, this version only targets networks other than mainnet!

      "},{"location":"Build/grest-changelog/#changes-for-api_3","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes_1","title":"Data Output Changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_4","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#101","title":"[1.0.1]","text":""},{"location":"Build/grest-changelog/#100","title":"[1.0.0]","text":""},{"location":"Build/grest-changelog/#100-rc1","title":"[1.0.0-rc1]","text":""},{"location":"Build/grest-changelog/#changes-for-api_4","title":"Changes for API","text":""},{"location":"Build/grest-changelog/#data-output-changes_2","title":"Data Output Changes","text":""},{"location":"Build/grest-changelog/#input-parameter-changes","title":"Input Parameter Changes","text":""},{"location":"Build/grest-changelog/#changes-for-instance-providers_5","title":"Changes for Instance Providers","text":""},{"location":"Build/grest-changelog/#added","title":"Added","text":""},{"location":"Build/grest-changelog/#fixed","title":"Fixed","text":""},{"location":"Build/grest-changelog/#100-rc0-2022-04-29","title":"[1.0.0-rc0] - 2022-04-29","text":""},{"location":"Build/grest/","title":"Koios gRest","text":"

      Important

      "},{"location":"Build/grest/#what-is-grest","title":"What is gRest","text":"

gRest is an open source implementation of a query layer built over dbsync using PostgREST and HAProxy. The package is built as part of the Koios team's efforts to unite the community's individual streams of work, provide a more aligned structure for querying dbsync, and adopt standardisation of queries utilising open-source tooling as well as collaboration. In addition, there are also accessibility features to deploy rules for failover, perform healthchecks, set up priorities, prevent DDoS attacks, provide timeouts, report tips for analysis over a longer period, etc - which can prove really useful when performing any analysis for instances.

      Note

Note that the scripts below do allow for provisioning ogmios integration too, but Ogmios - currently - is not designed to provide advanced session management for a server-client architecture in the absence of a middleware. Thus, the availability of ogmios from the monitoring instance is restricted to avoid the ability to DDoS an instance.

      "},{"location":"Build/grest/#components","title":"Components","text":"
1. PostgREST: An RPC JSON interface for any PostgreSQL database (in our case, the database served via cardano-db-sync) to provide a RESTful Web Service. The PostgREST endpoints are essentially the tables/functions defined in the elected schema via the grest config file. You can read more about advanced query syntax using the PostgREST API here, but we will provide a simpler view using examples towards the end of the page. It is an easy alternative - with almost no overhead, as it directly serves the underlying database as an API - compared to the Cardano GraphQL component (which may often have lags). Some of the other advantages of PostgREST over graphql-based projects are performance, being stateless, zero overhead, and support for JWT / native Postgres DB authentication against the REST interface.

      2. HAProxy: An easy gateway proxy that automatically provides failover/basic DDoS protection, specify rules management for load balancing, setup multiple frontend/backends, provide easy means to have TLS enabled for public facing instances, etc. You may alter the settings for proxy layer as per your SecOps preferences. This component is optional (eg: if you prefer to expose your PostgREST server itself, you can do so using similar steps below).

      "},{"location":"Build/grest/#setup","title":"Setup gRest services","text":"

      To start with you'd want to ensure your current shell session has access to Postgres credentials, continuing from examples from the above mentioned Sample Postgres deployment guide.

      cd $CNODE_HOME/priv\nPGPASSFILE=$CNODE_HOME/priv/.pgpass\npsql cexplorer\n

Ensure that you can connect to your Postgres DB fine using the above (quit from psql once validated using \\q). As part of guild-deploy.sh execution, you'll find the setup-grest.sh file made available in the ${CNODE_HOME}/scripts folder, which helps you automate the installation of PostgREST and HAProxy, as well as bringing in the latest queries/functions provided via Koios to your instance.

      Warning

As of now, gRest services are in alpha stage - while they can be utilised, please remember there may be breaking changes, and every collaborator is expected to work with the team to keep their instances up-to-date using the alpha branch.

Familiarise yourself with the usage options for the setup script; the syntax can be viewed as below:

      cd \"${CNODE_HOME}\"/scripts\n./setup-grest.sh -h\n#\n# Usage: setup-grest.sh [-f] [-i [p][r][m][c][d]] [-u] [-b <branch>]\n# \n# Install and setup haproxy, PostgREST, polling services and create systemd services for haproxy, postgREST and dbsync\n# \n# -f    Force overwrite of all files including normally saved user config sections\n# -i    Set-up Components individually. If this option is not specified, components will only be installed if found missing (eg: -i prcd)\n#     p    Install/Update PostgREST binaries by downloading latest release from github.\n#     r    (Re-)Install Reverse Proxy Monitoring Layer (haproxy) binaries and config\n#     m    Install/Update Monitoring agent scripts\n#     c    Overwrite haproxy, postgREST configs\n#     d    Overwrite systemd definitions\n# -u    Skip update check for setup script itself\n# -q    Run all DB Queries to update on postgres (includes creating grest schema, and re-creating views/genesis table/functions/triggers and setting up cron jobs)\n# -b    Use alternate branch of scripts to download - only recommended for testing/development (Default: master)\n#\n

      To run the setup overwriting all standard deployment tasks from a branch (eg: koios-1.0.9 branch), you may want to use:

      ./setup-grest.sh -f -i prmcd -r -q -b koios-1.0.9\n

      Similarly - if you'd like to re-install all components and force overwrite all configs but not reset cache tables, you may run:

      ./setup-grest.sh -f -i prmcd -q\n

      Another example could be to preserve your config, but only update queries using an alternate branch (eg: let's say you want to try the branch alpha prior to a tagged release). To do so, you may run:

      ./setup-grest.sh -q -b alpha\n

Please ensure you follow the on-screen instructions, if any (for example restarting deployed services, or updating configs to specify correct target postgres URLs/enable TLS/add peers etc in ${CNODE_HOME}/priv/grest.conf and ${CNODE_HOME}/files/haproxy.cfg).

The default ports used will make the haproxy instance available at port 8053, or 8453 if TLS is enabled (you might want to add a firewall rule to open this port to the services you would like to access it from). If you want to prevent unauthenticated access to the grest schema, uncomment the jwt-secret and specify a custom secret-token.

      Reminder

Once you've successfully deployed the grest instance, it will deploy certain cron jobs that will ensure the relevant cache tables are updated periodically. Until these have finished (especially on the first run, this could take an hour or so on mainnet), your instance will likely not pass any tests from grest-poll.sh, but that's expected.

      "},{"location":"Build/grest/#tls","title":"Enable TLS on HAProxy","text":"

      In order to enable SSL on your haproxy, all you need to do is edit the file ${CNODE_HOME}/files/haproxy.cfg and update the frontend app section to uncomment ssl bind (and comment normal bind).

      Info

If you're not familiar with how to configure TLS, or would not like to buy a certificate, you can find tips on how to create a TLS certificate for free via LetsEncrypt using the tutorials here. Once you have a TLS certificate generated, you need to chain the private key and full-chain cert together in a file - /etc/ssl/server.pem - which can then be referenced as below:

      frontend app\n  #bind 0.0.0.0:8053\n  ## If using SSL, comment line above and uncomment line below\n  bind :8453 ssl crt /etc/ssl/server.pem no-sslv3\n  http-request set-log-level silent\n  acl srv_down nbsrv(grest_postgrest) eq 0\n  acl is_wss hdr(Upgrade) -i websocket\n  ...\n
      Restart haproxy service for changes to take effect.
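
Depending on how haproxy was deployed on your setup, this could be as simple as the below (an assumption that the systemd unit is named haproxy.service; adjust to the service name used on your system):

sudo systemctl restart haproxy.service\n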

      "},{"location":"Build/grest/#validation","title":"Validation","text":"

With the setup, you also get a checkstatus.sh script, which will query the Postgres DB instance via haproxy (through postgREST), and will only show an instance as up if the latest block in your DB instance is within 180 seconds of the current time.
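
As an illustration, assuming the script sits alongside the other helper scripts in the scripts folder, a manual check could look like:

cd \"${CNODE_HOME}\"/scripts\n./checkstatus.sh\n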

      Important

If you'd like to participate in joining the elastic cluster via Koios, please raise a PR request by editing the topology files in this folder to do so!!

      If you were using guild network, you could do a couple of very basic sanity checks as per below:

      1. To query active stake for pool pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr in epoch 122, we can execute the below:

        curl -d _pool_bech32=pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr -d _epoch_no=122 -s http://localhost:8053/rpc/pool_active_stake\n## {\"active_stake_sum\" : 19409732875}\n

      2. To check latest owner key(s) for a given pool pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr, you can execute the below:

        curl -d _pool_bech32=pool1z2ry6kxywgvdxv26g06mdywynvs7jj3uemnxv273mr5esukljsr -s http://localhost:8050/rpc/pool_owners\n## [{\"owner\" : \"stake_test1upx5p04dn3t6dvhfh27744su35vvasgaaq565jdxwlxfq5sdjwksw\"}, {\"owner\" : \"stake_test1uqak99cgtrtpean8wqwp7d9taaqkt9gkkxga05m5azcg27chnzfry\"}]\n

You may want to explore all the endpoints that come out of the box and test them. To do so, refer to the API documentation for OpenAPI3 documentation. Each endpoint has a pre-filled example for mainnet and connects by default to the primary Koios endpoint, allowing you to test endpoints and, if needed, grab the curl commands to start testing against your local or remote instances.

      "},{"location":"Build/grest/#participating-in-koios-cluster-as-instance-provider","title":"Participating in Koios Cluster as instance Provider","text":"

      If you're interested to participate in decentralised infrastructure by providing an instance, there are a few additional steps you'd need:

1. Enable ports for your HAProxy instance (default: 8053), gRest Exporter service (default: 8059) and (optionally) submit API instance (default: 8090) to the monitoring instance of the corresponding network (you do not need to open these ports to the internet).

2. Ensure that each of the services above is listening on your public IP address (for instance, submitapi.sh might need to be edited to change HOSTADDR to 0.0.0.0 and restarted).

      3. Create a PR specifying connectivity information to your HAProxy port here.

      4. Make sure to join the telegram discussions group to participate in any discussions, actions, polls for new-features, etc. Feel free to give a shout in the group in case you have trouble following any of the above

      "},{"location":"Build/node-cli/","title":"Node & CLI","text":"

      Reminder !!

      Ensure the Pre-Requisites are in place before you proceed.

      "},{"location":"Build/node-cli/#build-instructions","title":"Build Instructions","text":""},{"location":"Build/node-cli/#clone-the-repository","title":"Clone the repository","text":"

      Execute the below to clone the cardano-node repository to $HOME/git folder on your system:

      cd ~/git\ngit clone https://github.com/input-output-hk/cardano-node\ncd cardano-node\n
      "},{"location":"Build/node-cli/#build-cardano-node","title":"Build Cardano Node","text":"

      You can use the instructions below to build the latest release of cardano-node.

      git fetch --tags --all\ngit pull\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-node/releases/latest | jq -r .tag_name)\n\n# Use `-l` argument if you'd like to use system libsodium instead of IOG fork of libsodium while compiling\n$CNODE_HOME/scripts/cabal-build-all.sh\n

      The above would copy the binaries built into ~/.local/bin folder.

      "},{"location":"Build/node-cli/#download-pre-compiled-binary-from-node-release","title":"Download pre-compiled Binary from Node release","text":"

While certain folks might want to build the node themselves (be it due to OS/arch compatibility, trust factor or customisations), for most it might not make sense to build the node locally. Instead, you can download the binaries using the cardano-node release notes, wherein you can find the download links for every version. Once downloaded, you would want to make the binaries available on your preferred PATH in your environment (if you're asking how, that would mean you've skipped the skillsets mentioned on the homepage).
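
As a rough sketch (the archive name and internal layout vary between releases, so treat the below as illustrative rather than exact), making the downloaded binaries available on PATH could look like:

cd /tmp\ntar -xf cardano-node-<version>-linux.tar.gz    # replace with the actual archive you downloaded\n# adjust the source paths below if the binaries are nested (eg: under a bin/ directory) inside the archive\nmv cardano-node cardano-cli \"${HOME}\"/.local/bin/\n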

      "},{"location":"Build/node-cli/#verify","title":"Verify","text":"

      Execute cardano-cli and cardano-node to verify output as below (the exact version and git rev should depend on your checkout tag on github repository):

      cardano-cli version\n# cardano-cli 8.1.2 - linux-x86_64 - ghc-8.10\n# git rev <...>\ncardano-node version\n# cardano-node 8.1.2 - linux-x86_64 - ghc-8.10\n# git rev <...>\n
      "},{"location":"Build/node-cli/#update-port-number-or-pool-name-for-relative-paths","title":"Update port number or pool name for relative paths","text":"

Before you go ahead with starting your node, you may want to update the value of CNODE_PORT in $CNODE_HOME/scripts/env. Note that it is imperative for operational relays and pools to ensure that the port mentioned is opened via the firewall to the destination your node is supposed to connect from. Update your network/firewall configuration accordingly. Future executions of guild-deploy.sh will preserve and not overwrite these values.

      CNODEBIN=\"${HOME}/.local/bin/cardano-node\"\nCCLI=\"${HOME}/.local/bin/cardano-cli\"\nCNODE_PORT=6000\nPOOL_NAME=\"GUILD\"\n

      Important

POOL_NAME is the name of the folder that you will use when registering pools and starting the node in core mode. This folder would typically contain the hot.skey, vrf.skey and op.cert files required. If the mentioned files are absent, the node will automatically start in passive mode. Note that if CNODE_PORT is changed, you'd want to re-do the deployment of the systemd service as mentioned later in the guide.

      "},{"location":"Build/node-cli/#start-the-node","title":"Start the node","text":"

To test starting the node in interactive mode, you can use the pre-built script below (cnode.sh). Note that your node logs are written to the $CNODE_HOME/logs folder, so you may not see much output beyond Listening on http://127.0.0.1:12798. This script automatically determines whether to start the node as a relay or block producer (the latter if the required pool keys are present in $CNODE_HOME/priv/pool/<POOL_NAME> as mentioned above). The script contains a user-defined variable CPU_CORES which determines the number of CPU cores the node will use upon start-up:

      ######################################\n# User Variables - Change as desired #\n# Common variables set in env file   #\n######################################\n\n#CPU_CORES=2            # Number of CPU cores cardano-node process has access to (please don't set higher than physical core count, 2-4 recommended)\n
      You can uncomment this and set to the desired number, but be wary not to go above your physical core count.
      cd \"${CNODE_HOME}\"/scripts\n./cnode.sh\n

      Ensure you do not have any errors in the console. To stop the node, hit Ctrl-C - we will start the node as systemd later in the document.

      "},{"location":"Build/node-cli/#modify-the-node-to-p2p-mode","title":"Modify the node to P2P mode","text":"

      Note

The section below only refers to mainnet, as the Guildnet/Preview/Preprod templates already come with P2P as the default mode and do not require the steps below.

      In case you prefer to start the node in P2P mode (ideally, only on relays), you can do so by replacing the config.json and topology.json files in $CNODE_HOME/files folder. You can find a sample of these two files that can be downloaded using commands below:

      cd \"${CNODE_HOME}\"/files\nmv config.json config.json.bkp_$(date +%s)\nmv topology.json topology.json.bkp_$(date +%s)\ncurl -sL -f \"https://raw.githubusercontent.com/cardano-community/guild-operators/master/files/config-mainnet.p2p.json\" -o config.json\ncurl -sL -f \"https://raw.githubusercontent.com/cardano-community/guild-operators/alpha/files/topology-mainnet.json\" -o topology.json\n

Once downloaded, you'd want to update config.json (if you want to update any port/path references or change tracers from the default) and the topology.json file to include your core/relay nodes in the localRoots section (replacing the dummy \"127.0.0.1\" address values currently in place). The P2P topology file provides a few public nodes as a fallback to avoid a single point of reliance, these being the IO-provided mainnet nodes. You can also remove/update any additional peers as per your preference.

Once updated - since you modified the file manually - there is always a chance of human error (eg: a missing comma/quote). Thus, we would recommend you start the node interactively once again before proceeding.
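
Before doing so, since jq is already available as part of the pre-requisites, a quick syntax check of the edited files can catch such mistakes early (illustrative):

jq . \"${CNODE_HOME}\"/files/topology.json >/dev/null && echo \"topology.json parses OK\"\njq . \"${CNODE_HOME}\"/files/config.json >/dev/null && echo \"config.json parses OK\"\n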

      cd \"${CNODE_HOME}\"/scripts\n./cnode.sh\n

      Ensure you do not have any errors in the console. To stop the node, hit Ctrl-C - we will start the node as systemd later in the document.

      Note

      An average pool operator may not require cardano-submit-api at all. Please verify if it is required for your use as mentioned here. If - however - you do run submit-api for accepting sizeable transaction load, you would want to override the default MEMPOOL_BYTES by uncommenting it in cnode.sh.

      "},{"location":"Build/node-cli/#start-the-submit-api","title":"Start the submit-api","text":"

      cardano-submit-api is one of the binaries built as part of cardano-node repository and allows you to submit transactions over a Web API. To run this service interactively, you can use the pre-built script below (submitapi.sh). Make sure to update submitapi.sh script to change listen IP or Port that you'd want to make this service available on.

      cd $CNODE_HOME/scripts\n./submitapi.sh\n
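
While the service is running, you can, as an illustration, submit a signed transaction to it over HTTP. The example below assumes the default listen address of 127.0.0.1:8090 and a file containing the raw CBOR bytes of a signed transaction (note that the cborHex field from a cardano-cli tx.signed envelope needs to be converted to raw binary first):

curl -s -X POST -H \"Content-Type: application/cbor\" --data-binary @tx.signed.cbor http://127.0.0.1:8090/api/submit/tx\n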

      To stop the process, hit Ctrl-C

      "},{"location":"Build/node-cli/#systemd","title":"Run as systemd service","text":"

The preferred way to run the node (and submit-api) is through a service manager like systemd. This section explains how to set up a systemd service file.

      1. Deploy as a systemd service Execute the below command to deploy your node as a systemd service (from the respective scripts folder):

      cd $CNODE_HOME/scripts\n./cnode.sh -d\n# Deploying cnode.service as systemd service..\n# cnode.service deployed successfully!!\n\n./submitapi.sh -d\n# Deploying cnode-submit-api.service as systemd service..\n# cnode-submit-api deployed successfully!!\n

      2. Start the service Run below commands to enable automatic start of service on startup and start it.

      sudo systemctl start cnode.service\nsudo systemctl start cnode-submit-api.service\n

      3. Check status and stop/start commands Replace status with stop/start/restart depending on what action to take.

      sudo systemctl status cnode.service\nsudo systemctl status cnode-submit-api.service\n

      Important

In case you see the node exit unsuccessfully upon checking status, please verify you've followed the transition process correctly as documented below, and that you do not have another instance of the node already running. It also helps to check your system logs (/var/log/syslog for Debian-based and /var/log/messages for Red Hat/CentOS/Fedora systems; you can also check journalctl -f -u <service> to examine startup attempts for services) for any errors while starting the node.

      You can use gLiveView to monitor your node that was started as a systemd service.

      cd $CNODE_HOME/scripts\n./gLiveView.sh\n
      "},{"location":"Build/offchain-metadata-tools/","title":"Offchain Metadata Tools","text":"

      Important

      In the Cardano multi-asset era, this project helps you create and submit metadata describing your assets, storing them off-chain.

      "},{"location":"Build/offchain-metadata-tools/#download-pre-built-binaries","title":"Download pre-built binaries","text":"

      Go to input-output-hk/offchain-metadata-tools to download the binaries and place in a directory specified by PATH, e.g. $HOME/.local/bin/.

      "},{"location":"Build/offchain-metadata-tools/#build-instructions","title":"Build Instructions","text":"

As an alternative to pre-built binaries, the instructions below describe how to build the token-metadata-creator tool, but the offchain-metadata-tools repository contains other tools as well. Build the ones needed for your installation.

      "},{"location":"Build/offchain-metadata-tools/#clone-the-repository","title":"Clone the repository","text":"

      Execute the below to clone the offchain-metadata-tools repository to $HOME/git folder on your system:

      cd ~/git\ngit clone https://github.com/input-output-hk/offchain-metadata-tools.git\ncd offchain-metadata-tools/token-metadata-creator\n
      "},{"location":"Build/offchain-metadata-tools/#build-token-metadata-creator","title":"Build token-metadata-creator","text":"

      You can use the instructions below to build token-metadata-creator, same steps can be executed in future to update the binaries (replacing appropriate tag) as well.

      git fetch --tags --all\ngit pull\n# Replace master with appropriate tag if you'd like to avoid compiling against master\ngit checkout master\n$CNODE_HOME/scripts/cabal-build-all.sh\n
      The above would copy the binaries into ~/.local/bin folder.

      "},{"location":"Build/offchain-metadata-tools/#verify","title":"Verify","text":"

      Verify that the tool is executable from anywhere by running:

      token-metadata-creator -h\n
      "},{"location":"Build/wallet/","title":"Wallet","text":"

      !> - An average pool operator may not require cardano-wallet at all. Please verify if it is required for your use as mentioned here.

      Ensure the Pre-Requisites are in place before you proceed.

      "},{"location":"Build/wallet/#build-instructions","title":"Build Instructions","text":"

      Follow instructions below for building the cardano-wallet binary:

      "},{"location":"Build/wallet/#clone-the-repository","title":"Clone the repository","text":"

      Execute the below to clone the cardano-wallet repository to $HOME/git folder on your system:

      cd ~/git\ngit clone https://github.com/input-output-hk/cardano-wallet\ncd cardano-wallet\n
      "},{"location":"Build/wallet/#build-cardano-wallet","title":"Build Cardano Wallet","text":"

      You can use the instructions below to build the latest release of cardano-wallet.

      !> - Note that the latest release of cardano-wallet may not work with the latest release of cardano-node. Please check the compatibility of each cardano-wallet release yourself in the official docs, e.g. https://github.com/input-output-hk/cardano-wallet/releases/latest.

      git fetch --tags --all\ngit pull\n# Replace tag against checkout if you do not want to build the latest released version\ngit checkout $(curl -s https://api.github.com/repos/input-output-hk/cardano-wallet/releases/latest | jq -r .tag_name)\n$CNODE_HOME/scripts/cabal-build-all.sh\n

      The above would copy the binaries into ~/.local/bin folder.

      "},{"location":"Build/wallet/#start-the-wallet","title":"Start the wallet","text":"

      You can run the below to connect to a cardano-node instance that is expected to be already running and the wallet will start syncing.

      cardano-wallet serve \\\n    --node-socket $CNODE_HOME/sockets/node0.socket \\\n    --mainnet \\\n    --database $CNODE_HOME/priv/wallet\n# if using the testnet flag you also need to specify the testnet shelley-genesis.json file\n

      "},{"location":"Build/wallet/#verify-the-wallet-is-handling-requests","title":"Verify the wallet is handling requests","text":"

      cardano-wallet network information\n
      Expected output should be similar to the following
      Ok.\n{\n\"network_tip\": {\n\"time\": \"2021-06-01T17:31:05Z\",\n\"epoch_number\": 269,\n\"absolute_slot_number\": 31002374,\n\"slot_number\": 157574\n},\n\"node_era\": \"mary\",\n\"node_tip\": {\n\"height\": {\n\"quantity\": 5795127,\n\"unit\": \"block\"\n},\n\"time\": \"2021-06-01T17:31:00Z\",\n\"epoch_number\": 269,\n\"absolute_slot_number\": 31002369,\n\"slot_number\": 157569\n},\n\"sync_progress\": {\n\"status\": \"ready\"\n},\n\"next_epoch\": {\n\"epoch_start_time\": \"2021-06-04T21:44:51Z\",\n\"epoch_number\": 270\n}\n}\n
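
      If you started the wallet on a non-default port, the same check can be pointed at it explicitly (a sketch assuming the default port of 8090 was overridden to 8091 at startup; adjust to your setup):

      cardano-wallet network information --port 8091  # port value is illustrative only\n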

      "},{"location":"Build/wallet/#creatingrestoring-wallet","title":"Creating/Restoring Wallet","text":"

      If you're creating a new wallet, you'd first want to generate a mnemonic for use (see below):

      cardano-wallet recovery-phrase generate\n# false brother typical saddle settle phrase foster sauce ask sunset firm gate service render burger\n
      You can then use the above mnemonic to restore a wallet as shown below:
      cardano-wallet wallet create from-recovery-phrase MyWalletName\n

      "},{"location":"Build/wallet/#expected-output","title":"Expected output:","text":"
      Please enter a 15\u201324 word recovery phrase: false brother typical saddle settle phrase foster sauce ask sunset firm gate service render burger\n(Enter a blank line if you do not wish to use a second factor.)\nPlease enter a 9\u201312 word second factor:\nPlease enter a passphrase: **********\nEnter the passphrase a second time: **********\nOk.\n{\n    ...\n}\n
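
      To confirm that the running wallet server now knows about the wallet, you can list wallets afterwards (a quick check, not part of the original walkthrough):

      cardano-wallet wallet list\n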
      "},{"location":"Scripts/blockperf/","title":"BlockPerf","text":"

      Reminder !!

      Ensure the Pre-Requisites are in place before you proceed.

      blockPerf.sh is a script to monitor the network propagation of new blocks as seen by the local cardano-node.

      "},{"location":"Scripts/blockperf/#block-propagation-traces","title":"Block propagation traces","text":"

      Although blockPerf can also run on the block producer, it makes the most sense to run it on the upstream relays. There it waits for each new block announced to the relay over the network by its remote peers.

      It records the delay times that result at each step of the propagation (header received, block requested, block downloaded, block adopted), as shown in the console view below.

      You can view this data locally as a console stream, or run it as a systemd service in the background.

      BlockPerf also sends this data to the TopologyUpdater server so that it can be compared across operators (similar to sendtip to pooltool). As a contributing operator you get to see how your own relays compare to other nodes regarding receive quality, delay times and thus performance.

      There is no connection or constraint between the TopologyUpdater Relay subscription and the BlockPerf analysis. BlockPerf is even designed to work outside the cnTools suite.

      These results are a good basis for optimisations and for evaluating which changes were useful or might be required to improve performance compared to other relay nodes.

      "},{"location":"Scripts/blockperf/#installation","title":"Installation","text":"

      The script is best run as a background process. This can be accomplished in many ways, but the preferred method is to run it as a systemd service. A terminal multiplexer like tmux or screen could also be used, but this is not covered here.

      "},{"location":"Scripts/blockperf/#run-as-service","title":"Run as service","text":"

      Use the deploy-as-systemd.sh script to create a systemd unit file. In this setup the script is started in \"service\" mode. Error/Warn level log output is handled by syslog and ends up in the system's standard syslog file, normally /var/log/syslog. journalctl -f -u cnode-tu-blockperf.service can be used to check service output (follow mode).
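
      For example, to confirm the unit is active and then follow its output:

      sudo systemctl status cnode-tu-blockperf.service\njournalctl -f -u cnode-tu-blockperf.service\n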

      Outside the cnTools environment call blockPerf.sh -d to install it as a systemd service.

      "},{"location":"Scripts/blockperf/#console-view","title":"Console view","text":"

      If you run blockPerf locally in the console (scripts/blockPerf.sh), then immediately after the appearance of a new block it shows where it came from, how many slots away from the previous block it was, and how many milliseconds the individual steps took.

      Block:.... 6860534\n Slot..... 52833850 (+59s)\n ......... 2022-02-09 09:49:01\n Header... 2022-02-09 09:49:02,780 (+1780 ms)\n Request.. 2022-02-09 09:49:02,780 (+0 ms)\n Block.... 2022-02-09 09:49:02,830 (+50 ms)\n Adopted.. 2022-02-09 09:49:02,900 (+70 ms)\n Size..... 79976 bytes\n delay.... 1.819971868 sec\n From..... 104.xxx.xxx.61:3001\n\nBlock:.... 6860535\n Slot..... 52833857 (+7s)\n ......... 2022-02-09 09:49:08\n Header... 2022-02-09 09:49:08,960 (+960 ms)\n Request.. 2022-02-09 09:49:08,970 (+10 ms)\n Block.... 2022-02-09 09:49:09,020 (+50 ms)\n Adopted.. 2022-02-09 09:49:09,090 (+70 ms)\n Size..... 64950 bytes\n delay.... 1.028341023 sec\n From..... 34.xxx.xxx.15:4001\n
      "},{"location":"Scripts/blockperf/#collaborative-web-view","title":"Collaborative web view","text":"

      A further aim of the blockPerf project is to use the data that individual nodes send to the central TopologyUpdater database to produce graphical visualisations and evaluations that provide the participating node operators with useful insights into their performance compared to all others.

      "},{"location":"Scripts/cncli/","title":"CNCLI","text":"

      Reminder !!

      Ensure the Pre-Requisites are in place before you proceed.

      cncli.sh is a script to download and deploy CNCLI, created and maintained by Andrew Westberg. It's a community-based CLI tool written in Rust for low-level cardano-node communication. Usage is optional and no script is dependent on it. Its main functions are described below.

      "},{"location":"Scripts/cncli/#installation","title":"Installation","text":"

      The cncli.sh script's main functions (sync, leaderlog, validate and the PoolTool sendslots/sendtip operations) are not meant to be run manually, but are instead deployed as systemd services that run in the background to do the block scraping and validation automatically. Additional commands exist for manual execution to initiate the sqlite DB, fill the blocklog DB with all blocks created by the pool known to the blockchain, migrate an old cntoolsBlockCollector JSON blocklog, and re-validate blocks and leaderlogs. See the usage output below for a complete list of available commands.

      The script works in tandem with Log Monitor to provide faster adopted status, but mainly to catch slots the node is leader for but is unable to create a block for. These are marked as invalid. Blocklog will however work fine without the logMonitor service, and CNCLI is able to handle everything except catching invalid blocks.

      1. Run the latest version of guild-deploy.sh with guild-deploy.sh -s c to download and install Rust and CNCLI. The IOG fork of libsodium required by CNCLI is automatically compiled by the CNCLI build process. If a previous installation is found, Rust and CNCLI will be updated to the latest version.
      2. Run deploy-as-systemd.sh to deploy the systemd services that handle all the work in the background. Of the services listed below, five are related to CNCLI; see above for the different purposes they serve.
      3. If you want to disable some of the deployed services, run sudo systemctl disable <service>:

      • cnode.service (main cardano-node launcher)
      • cnode-cncli-sync.service
      • cnode-cncli-leaderlog.service
      • cnode-cncli-validate.service
      • cnode-cncli-ptsendtip.service
      • cnode-cncli-ptsendslots.service
      • cnode-logmonitor.service (see Log Monitor)
      "},{"location":"Scripts/cncli/#configuration","title":"Configuration","text":"

      You can override the values in the script in the User Variables section shown below. POOL_ID, POOL_VRF_SKEY and POOL_VRF_VKEY should automatically be detected if POOL_NAME is set in the common env file and can be left commented. PT_API_KEY and POOL_TICKER need to be set in the script before starting the services if PoolTool sendtip/sendslots are to be used. For the rest of the commented values, if the defaults do not provide the right values, uncomment and adjust them.

      #POOL_ID=\"\"                               # Automatically detected if POOL_NAME is set in env. Required for leaderlog calculation & pooltool sendtip, lower-case hex pool id\n#POOL_VRF_SKEY=\"\"                         # Automatically detected if POOL_NAME is set in env. Required for leaderlog calculation, path to pool's vrf.skey file\n#POOL_VRF_VKEY=\"\"                         # Automatically detected if POOL_NAME is set in env. Required for block validation, path to pool's vrf.vkey file\n#PT_API_KEY=\"\"                            # POOLTOOL sendtip: set API key, e.g \"a47811d3-0008-4ecd-9f3e-9c22bdb7c82d\"\n#POOL_TICKER=\"\"                           # POOLTOOL sendtip: set the pools ticker, e.g. \"TCKR\"\n#PT_HOST=\"127.0.0.1\"                      # POOLTOOL sendtip: connect to a remote node, preferably block producer (default localhost)\n#PT_PORT=\"${CNODE_PORT}\"                  # POOLTOOL sendtip: port of node to connect to (default is CNODE_PORT from the env file)\n#CNCLI_DIR=\"${CNODE_HOME}/guild-db/cncli\" # path to the directory for cncli sqlite db\n#SLEEP_RATE=60                            # CNCLI leaderlog/validate: time to wait until next check (in seconds)\n#CONFIRM_SLOT_CNT=600                     # CNCLI validate: require at least these many slots to have passed before validating\n#CONFIRM_BLOCK_CNT=15                     # CNCLI validate: require at least these many blocks on top of minted before validating\n#TIMEOUT_LEDGER_STATE=300                 # CNCLI leaderlog: timeout in seconds for ledger-state query\n#BATCH_AUTO_UPDATE=N                      # Set to Y to automatically update the script if a new version is available without user interaction\n
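
      For example, to enable the PoolTool integration you would uncomment and fill in these two variables (the values below are illustrative placeholders only):

      PT_API_KEY=\"a47811d3-0008-4ecd-9f3e-9c22bdb7c82d\"   # placeholder, use your own PoolTool API key\nPOOL_TICKER=\"TCKR\"                                   # placeholder, use your pool's ticker\n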
      "},{"location":"Scripts/cncli/#run","title":"Run","text":"

      Services are controlled by sudo systemctl <status|start|stop|restart> <service name>. All services are configured as child services of cnode.service and, as such, when an action is taken against this service it is replicated to all child services. E.g. running sudo systemctl start cnode.service will also start all child services.

      Log output is handled by syslog and ends up in the system's standard syslog file, normally /var/log/syslog. journalctl -f -u <service> can be used to check service output (follow mode). Other logging configurations are not covered here.
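
      For example, to restart the whole stack and then follow the chainsync service output:

      sudo systemctl restart cnode.service\njournalctl -f -u cnode-cncli-sync.service\n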

      Recommended workflow to get started with CNCLI blocklog:

      1. Install and deploy services according to Installation section.
      2. Set required user variables according to Configuration section.
      3. (optional) If a previous blocklog DB created by cntoolsBlockCollector exists, run this command to migrate the JSON storage to the new SQLite DB:
      4. $CNODE_HOME/scripts/cncli.sh migrate <path> where <path> is the location of the directory containing all blocks_.json files.
      5. Start the deployed services with:
      6. sudo systemctl start cnode-cncli-sync.service (starts leaderlog, validate & ptsendslots automatically)
      7. sudo systemctl start cnode-logmonitor.service
      8. sudo systemctl start cnode-cncli-ptsendtip.service (optional but recommended)
      9. alternatively, restart the main service, which will trigger a start of all child services, with:
      10. sudo systemctl restart cnode.service
      11. Run the init command to fill the DB with all blocks made by your pool known to the blockchain:
      12. $CNODE_HOME/scripts/cncli.sh init
      13. Enjoy full blocklog automation and visit the View Blocklog section for instructions on how to show blocks from the blocklog DB.
      14. Usage: cncli.sh [operation <sub arg>]\nScript to run CNCLI, best launched through systemd deployed by 'deploy-as-systemd.sh'\n\nsync        Start CNCLI chainsync process that connects to cardano-node to sync blocks stored in SQLite DB (deployed as service)\nleaderlog   One-time leader schedule calculation for current epoch, then continuously monitors and calculates schedule for coming epochs, 1.5 days before epoch boundary on the mainnet (deployed as service)\n  force     Manually force leaderlog calculation and overwrite even if already done, exits after leaderlog is calculated\nvalidate    Continuously monitor and confirm that the blocks made actually was accepted and adopted by chain (deployed as service)\n  all       One-time re-validation of all blocks in blocklog db\n  epoch     One-time re-validation of blocks in blocklog db for the specified epoch \nptsendtip   Send node tip to PoolTool for network analysis and to show that your node is alive and well with a green badge (deployed as service)\nptsendslots Securely sends PoolTool the number of slots you have assigned for an epoch and validates the correctness of your past epochs (deployed as service)\ninit        One-time initialization adding all minted and confirmed blocks to blocklog\nmigrate     One-time migration from old blocklog (cntoolsBlockCollector) to new format (post cncli)\n  path      Path to the old cntoolsBlockCollector blocklog folder holding json files with blocks created\n
        "},{"location":"Scripts/cncli/#view-blocklog","title":"View Blocklog","text":"

        Blocklog data is best and most easily viewed in CNTools and gLiveView, but the blocklog database is a SQLite DB, so if you are comfortable with SQL, the sqlite3 command can be used to query it.
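
        A minimal sketch of such a query (the DB path and the blocklog table/column names below are assumptions and may differ on your deployment):

        sqlite3 ${CNODE_HOME}/guild-db/blocklog/blocklog.db 'SELECT epoch, status, count(*) FROM blocklog GROUP BY epoch, status ORDER BY epoch DESC;'   # path and table/column names assumed\n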

        Block status

        - Leader    : Scheduled to make block at this slot\n- Ideal     : Expected/Ideal number of blocks assigned based on active stake (sigma)\n- Luck      : Leader slots assigned vs ideal slots for this epoch\n- Adopted   : Block created successfully\n- Confirmed : Block created validated to be on-chain with the certainty set in `cncli.sh` for `CONFIRM_BLOCK_CNT`\n- Missed    : Scheduled at slot but no record of it in CNCLI DB and no other pool has made a block for this slot\n- Ghosted   : Block created but marked as orphaned and no other pool has made a valid block for this slot -> height battle or block propagation issue\n- Stolen    : Another pool has a valid block registered on-chain for the same slot\n- Invalid   : Pool failed to create block, base64 encoded error message can be decoded with `echo <base64 hash> | base64 -d | jq -r`\n
        CNTools

        Open CNTools and select [b] Blocks to open the block viewer. Either select Epoch and enter the epoch you want to see a detailed view for, or choose Summary to display blocks for the last x epochs.

        If the node was elected to create blocks in the selected epoch it could look something like this:

        Summary
         >> BLOCKS\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCurrent epoch: 96\n\n+--------+---------------------------+----------------------+--------------------------------------+\n| Epoch  | Leader | Ideal | Luck     | Adopted | Confirmed  | Missed | Ghosted | Stolen | Invalid  |\n+--------+---------------------------+----------------------+--------------------------------------+\n| 96     | 34     | 31.66 | 107.39%  | 18      | 18         | 0      | 0       | 0      | 0        |\n| 95     | 32     | 30.57 | 104.68%  | 32      | 32         | 0      | 0       | 0      | 0        |\n+--------+---------------------------+----------------------+--------------------------------------+\n\n[h] Home | [b] Block View | [i] Info | [*] Refresh\n
        Epoch
         >> BLOCKS\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCurrent epoch: 96\n\n+---------------------------+----------------------+--------------------------------------+\n| Leader | Ideal | Luck     | Adopted | Confirmed  | Missed | Ghosted | Stolen | Invalid  |\n+---------------------------+----------------------+--------------------------------------+\n| 34     | 31.66 | 107.39%  | 18      | 18         | 0      | 0       | 0      | 0        |\n+---------------------------+----------------------+--------------------------------------+\n\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n| #   | Status     | Block    | Slot | SlotInEpoch  | Scheduled At             | Size  | Hash                                                              |\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n| 1   | confirmed  | 2043444  | 11142827 | 40427    | 2020-11-16 08:34:03 CET  | 3     | ec216d3fb01e4a3cc3e85305145a31875d9561fa3bbcc6d0ee8297236dbb4115  |\n| 2   | confirmed  | 2044321  | 11165082 | 62682    | 2020-11-16 14:44:58 CET  | 3     | b75c33a5bbe49a74e4b4cc5df4474398bfb10ed39531fc65ec2acc51f89ddce5  |\n| 3   | confirmed  | 2044397  | 11166970 | 64570    | 2020-11-16 15:16:26 CET  | 3     | c1ea37fd72543779b6dab46e3e29e0e422784b5fd6188f828ace9eabcc87088f  |\n| 4   | confirmed  | 2044879  | 11178909 | 76509    | 2020-11-16 18:35:25 CET  | 3     | 35a116cec80c5dc295415e4fc8e6435c562b14a5d6833027006c988706c60307  |\n| 5   | confirmed  | 2046965  | 11232557 | 130157   | 2020-11-17 09:29:33 CET  | 3     | d566e5a1f6a3d78811acab4ae3bdcee6aa42717364f9afecd6cac5093559f466  |\n| 6   | confirmed  | 2047101  | 11235675 | 133275   | 2020-11-17 10:21:31 CET  | 3     | 3a638e01f70ea1c4a660fe4e6333272e6c61b11cf84dc8a5a107b414d1e057eb  |\n| 7   | confirmed  | 2047221  | 11238453 | 136053   | 2020-11-17 11:07:49 CET  | 3     | 843336f132961b94276603707751cdb9a1c2528b97100819ce47bc317af0a2d6  |\n| 8   | confirmed  | 2048692  | 11273507 | 171107   | 2020-11-17 20:52:03 CET  | 3     | 9b3eb79fe07e8ebae163870c21ba30460e689b23768d2e5f8e7118c572c4df36  |\n| 9   | confirmed  | 2049058  | 11282619 | 180219   | 2020-11-17 23:23:55 CET  | 3     | 643396ea9a1a2b6c66bb83bdc589fa19c8ae728d1f1181aab82e8dfe508d430a  |\n| 10  | confirmed  | 2049321  | 11289237 | 186837   | 2020-11-18 01:14:13 CET  | 3     | d93d305a955f40b2298247d44e4bc27fe9e3d1486ef3ef3e73b235b25247ccd7  |\n| 11  | confirmed  | 2049747  | 11299205 | 196805   | 2020-11-18 04:00:21 CET  | 3     | 19a43deb5014b14760c3e564b41027c5ee50e0a252abddbfcac90c8f56dc0245  |\n| 12  | confirmed  | 2050415  | 11316075 | 213675   | 2020-11-18 08:41:31 CET  | 3     | dd2cb47653f3bfb3ccc8ffe76906e07d96f1384bafd57a872ddbab3b352403e3  |\n| 13  | confirmed  | 2050505  | 11318274 | 215874   | 2020-11-18 09:18:10 CET  | 3     | deb834bc42360f8d39eefc5856bb6d7cabb6b04170c842dcbe7e9efdf9dbd2e1  |\n| 14  | confirmed  | 2050613  | 11320754 | 218354   | 2020-11-18 09:59:30 CET  | 3     | bf094f6fde8e8c29f568a253201e4b92b078e9a1cad60706285e236a91ec95ff  |\n| 15  | confirmed  | 2050807  | 11325239 | 222839   | 2020-11-18 11:14:15 CET  | 3     | 21f904346ba0fd2bb41afaae7d35977cb929d1d9727887f541782576fc6a62c9  |\n| 16  | confirmed  | 2050997  | 11330062 | 227662   | 2020-11-18 12:34:38 CET  | 3     | 
109799d686fe3cad13b156a2d446a544fde2bf5d0e8f157f688f1dc30f35e912  |\n| 17  | confirmed  | 2051286  | 11336791 | 234391   | 2020-11-18 14:26:47 CET  | 3     | bb1beca7a1d849059110e3d7dc49ecf07b47970af2294fe73555ddfefb9561a8  |\n| 18  | confirmed  | 2051734  | 11348498 | 246098   | 2020-11-18 17:41:54 CET  | 3     | 87940b53c2342999c1ba4e185038cda3d8382891a16878a865f5114f540683de  |\n| 19  | leader     | -        | 11382001 | 279601   | 2020-11-19 03:00:17 CET  | -     | -                                                                 |\n| 20  | leader     | -        | 11419959 | 317559   | 2020-11-19 13:32:55 CET  | -     | -                                                                 |\n| 21  | leader     | -        | 11433174 | 330774   | 2020-11-19 17:13:10 CET  | -     | -                                                                 |\n| 22  | leader     | -        | 11434241 | 331841   | 2020-11-19 17:30:57 CET  | -     | -                                                                 |\n| 23  | leader     | -        | 11435289 | 332889   | 2020-11-19 17:48:25 CET  | -     | -                                                                 |\n| 24  | leader     | -        | 11440314 | 337914   | 2020-11-19 19:12:10 CET  | -     | -                                                                 |\n| 25  | leader     | -        | 11442361 | 339961   | 2020-11-19 19:46:17 CET  | -     | -                                                                 |\n| 26  | leader     | -        | 11443861 | 341461   | 2020-11-19 20:11:17 CET  | -     | -                                                                 |\n| 27  | leader     | -        | 11446997 | 344597   | 2020-11-19 21:03:33 CET  | -     | -                                                                 |\n| 28  | leader     | -        | 11453110 | 350710   | 2020-11-19 22:45:26 CET  | -     | -                                                                 |\n| 29  | leader     | -        | 11455323 | 352923   | 2020-11-19 23:22:19 CET  | -     | -                                                                 |\n| 30  | leader     | -        | 11505987 | 403587   | 2020-11-20 13:26:43 CET  | -     | -                                                                 |\n| 31  | leader     | -        | 11514983 | 412583   | 2020-11-20 15:56:39 CET  | -     | -                                                                 |\n| 32  | leader     | -        | 11516010 | 413610   | 2020-11-20 16:13:46 CET  | -     | -                                                                 |\n| 33  | leader     | -        | 11518958 | 416558   | 2020-11-20 17:02:54 CET  | -     | -                                                                 |\n| 34  | leader     | -        | 11533254 | 430854   | 2020-11-20 21:01:10 CET  | -     | -                                                                 |\n+-----+------------+----------+---------------------+--------------------------+-------+-------------------------------------------------------------------+\n
        gLiveView

        Currently shows a block summary for the current epoch. For full block details, use CNTools for now. Invalid, missed, ghosted and stolen blocks are only shown in case of a non-zero value.

        \u2502--------------------------------------------------------------\u2502\n\u2502 BLOCKS   Leader  | Ideal  | Luck    | Adopted | Confirmed    \u2502\n\u2502          24        27.42    87.53%    1         1            \u2502\n\u2502          08:07:57 until leader XXXXXXXXX.....................\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
        "},{"location":"Scripts/cntools-changelog/","title":"Changelog","text":"

        All notable changes to this tool will be documented in this file.

        Whenever you're updating between versions where the format/hash of keys has changed, or you're changing networks, it is recommended to back up your Wallet and Pool folders before you proceed with launching cntools on a fresh network.

        The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

        "},{"location":"Scripts/cntools-changelog/#1102-2023-10-30","title":"[11.0.2] - 2023-10-30","text":""},{"location":"Scripts/cntools-changelog/#fixed","title":"Fixed","text":"
        • Fix additional Ada printing. Now omits trailing zeros from fraction part of Ada output.
        "},{"location":"Scripts/cntools-changelog/#1101-2023-10-25","title":"[11.0.1] - 2023-10-25","text":""},{"location":"Scripts/cntools-changelog/#fixed_1","title":"Fixed","text":"
        • Fix display for Pool Cost and Pledge to accept integer as well as decimal format of ADA
        "},{"location":"Scripts/cntools-changelog/#1100-2023-07-05","title":"[11.0.0] - 2023-07-05","text":""},{"location":"Scripts/cntools-changelog/#changed","title":"Changed","text":"
        • CNTools now part of Koios brand
        "},{"location":"Scripts/cntools-changelog/#1040-2023-06-19","title":"[10.4.0] - 2023-06-19","text":""},{"location":"Scripts/cntools-changelog/#added","title":"Added","text":"
        • Support for SRV records
        • Support for cardano-node 8.1.1
        "},{"location":"Scripts/cntools-changelog/#1031-2023-06-03","title":"[10.3.1] - 2023-06-03","text":""},{"location":"Scripts/cntools-changelog/#fixed_2","title":"Fixed","text":"
        • Backup didn't properly exclude private keys
        "},{"location":"Scripts/cntools-changelog/#1030-2023-05-18","title":"[10.3.0] - 2023-05-18","text":""},{"location":"Scripts/cntools-changelog/#added_1","title":"Added","text":"
        • Support for voting as per CIP-0094
        "},{"location":"Scripts/cntools-changelog/#1023-2023-04-28","title":"[10.2.3] - 2023-04-28","text":""},{"location":"Scripts/cntools-changelog/#fixed_3","title":"Fixed","text":"
        • Additional HW signing fixes
        "},{"location":"Scripts/cntools-changelog/#1022-2023-04-24","title":"[10.2.2] - 2023-04-24","text":""},{"location":"Scripts/cntools-changelog/#fixed_4","title":"Fixed","text":"
        • Add special case handling for hardware wallets to use stake keys as witness for registering stake address
        "},{"location":"Scripts/cntools-changelog/#1021-2023-04-04","title":"[10.2.1] - 2023-04-04","text":""},{"location":"Scripts/cntools-changelog/#fixed_5","title":"Fixed","text":"
        • Moved test_koios call from cntools.library to cntools.sh
        "},{"location":"Scripts/cntools-changelog/#1020-2023-03-13","title":"[10.2.0] - 2023-03-13","text":""},{"location":"Scripts/cntools-changelog/#fixed_6","title":"Fixed","text":"
        • HW signing fix due to deprecated cardano-hw-cli sign call.
        • The check whether to use Koios API or not (env config) wasn't properly handled.
        "},{"location":"Scripts/cntools-changelog/#changed_1","title":"Changed","text":"
        • Disabled Koios for balance lookup to prefer local node check. In most circumstances this will be faster due to low latency. If needed, set WALLET_SELECTION_FILTER_LIMIT in cntools.sh to a lower limit to skip balance lookup on wallet selection if you have many wallets and delay is too long.
        "},{"location":"Scripts/cntools-changelog/#1011-2023-02-07","title":"[10.1.1] - 2023-02-07","text":""},{"location":"Scripts/cntools-changelog/#fixed_7","title":"Fixed","text":"
        • Disable dialog by default; it is an optional component and is no longer installed by default.
        "},{"location":"Scripts/cntools-changelog/#1010-2023-01-17","title":"[10.1.0] - 2023-01-17","text":""},{"location":"Scripts/cntools-changelog/#added_2","title":"Added","text":"
        • Hardware Wallets: Allow signing using cold keys for a pool, use it for rotating KES keys.
        "},{"location":"Scripts/cntools-changelog/#changed_2","title":"Changed","text":"
        • Keep deployment consistent with guild-deploy.sh
        "},{"location":"Scripts/cntools-changelog/#fixed_8","title":"Fixed","text":"
        • Fix parsing space in the name of assets
        "},{"location":"Scripts/cntools-changelog/#1005-2022-11-07","title":"[10.0.5] - 2022-11-07","text":""},{"location":"Scripts/cntools-changelog/#changed_3","title":"Changed","text":"
        • Updated testnet token registry to be reused for each non-mainnet network
        • Remove stale code for remote chain analysis
        "},{"location":"Scripts/cntools-changelog/#1004-2022-08-26","title":"[10.0.4] - 2022-08-26","text":""},{"location":"Scripts/cntools-changelog/#changed_4","title":"Changed","text":"
        • Allow pool cost to use fraction of ADA
        • Starts using koios-1.0.7 endpoints to fetch information
        "},{"location":"Scripts/cntools-changelog/#fixed_9","title":"Fixed","text":"
        • Fixes an issue with reuse of a variable name and an updated param name for cardano-cli.
        • Fix token minting and burning of assets
        "},{"location":"Scripts/cntools-changelog/#1003-2022-08-16","title":"[10.0.3] - 2022-08-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_10","title":"Fixed","text":"
        • env file was sourced after calling cntools.library, overriding test_koios result
        "},{"location":"Scripts/cntools-changelog/#1002-2022-08-13","title":"[10.0.2] - 2022-08-13","text":""},{"location":"Scripts/cntools-changelog/#fixed_11","title":"Fixed","text":"
        • Bump min cardano-hw-cli version to 1.10.0
        • Requires cardano-hw-cli to be present on online node for pool registration/modification to be able to transform tx if needed
        • Transform tx if needed before any witnessing/signing is done.
        • Wrong arguments in call to cardano-hw-cli for cddl-formatted tx
        "},{"location":"Scripts/cntools-changelog/#1001-2022-07-14","title":"[10.0.1] - 2022-07-14","text":""},{"location":"Scripts/cntools-changelog/#changed_5","title":"Changed","text":"
        • Transactions are now built using the cddl format to ensure that the transaction formatting adheres to the ledger specs.
        • Default to mary era transaction building format for now.
        "},{"location":"Scripts/cntools-changelog/#fixed_12","title":"Fixed","text":"
        • Cold signing fix for pool registration / update. Last key was added twice when assembling witnesses.
        "},{"location":"Scripts/cntools-changelog/#1000-2022-06-28","title":"[10.0.0] - 2022-06-28","text":""},{"location":"Scripts/cntools-changelog/#added_3","title":"Added","text":"
        • Support for Vasil Fork
        • Preliminary support for Post HF updates (a short release will follow post fork in coming days)
        • Minimum version for Node bumped to 1.35.0
        "},{"location":"Scripts/cntools-changelog/#changed_6","title":"Changed","text":"
        • Pool > Rotate code now uses kes-periodinfo CLI query to get counter from node (fallback for Koios)
        • Pool > Show Info updated to include current KES counter
        • Update getEraIdentifier to include Babbage era
        "},{"location":"Scripts/cntools-changelog/#910-2022-05-11","title":"[9.1.0] - 2022-05-11","text":""},{"location":"Scripts/cntools-changelog/#changed_7","title":"Changed","text":"
        • Harmonize flow for reusing old wallet configuration on pool modification vs setting new wallets.
        • Remove the requirement for reward stake signing key in wallet registration/modification
        • Reward wallet no longer auto-delegated on pool registration just like for multi-owners.
        "},{"location":"Scripts/cntools-changelog/#9010-2022-05-03","title":"[9.0.10] - 2022-05-03","text":""},{"location":"Scripts/cntools-changelog/#fixed_13","title":"Fixed","text":"
        • Detect if cardano-hw-cli has execution permission
        "},{"location":"Scripts/cntools-changelog/#909-2022-03-14","title":"[9.0.9] - 2022-03-14","text":""},{"location":"Scripts/cntools-changelog/#changed_8","title":"Changed","text":"
        • Add version (-v) argument to cntools script to print current version
        "},{"location":"Scripts/cntools-changelog/#908-2022-03-07","title":"[9.0.8] - 2022-03-07","text":""},{"location":"Scripts/cntools-changelog/#changed_9","title":"Changed","text":"
        • Remove HASH_IDENTIFIER variable references (the Ddz issue which required this separation was resolved a while ago)
        • Replace NETWORKID check with NWMAGIC variable
        "},{"location":"Scripts/cntools-changelog/#907-2022-03-02","title":"[9.0.7] - 2022-03-02","text":""},{"location":"Scripts/cntools-changelog/#fixed_14","title":"Fixed","text":"
        • Call Test Koios function at start of CNTools, instead of calling by default every time env is sourced
        "},{"location":"Scripts/cntools-changelog/#906-2022-02-20","title":"[9.0.6] - 2022-02-20","text":""},{"location":"Scripts/cntools-changelog/#fixed_15","title":"Fixed","text":"
        • Fix for update check if not executed from default scripts folder.
        "},{"location":"Scripts/cntools-changelog/#905-2022-02-16","title":"[9.0.5] - 2022-02-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_16","title":"Fixed","text":"
        • Script update code fixed to better handle in-app update. Would sometimes update but not source library correctly.
        "},{"location":"Scripts/cntools-changelog/#904-2022-02-14","title":"[9.0.4] - 2022-02-14","text":""},{"location":"Scripts/cntools-changelog/#fixed_17","title":"Fixed","text":"
        • Update request for pool_info endpoint from Koios
        "},{"location":"Scripts/cntools-changelog/#903-2022-02-01","title":"[9.0.3] - 2022-02-01","text":""},{"location":"Scripts/cntools-changelog/#added_4","title":"Added","text":"
        • Add a config variable TX_TTL to allow a transaction to be valid from the point of creation (by default for 3600 slots); the previous default of 10 minutes on mainnet could be hit-and-miss depending on the state of the network.
        "},{"location":"Scripts/cntools-changelog/#902-2022-01-22","title":"[9.0.2] - 2022-01-22","text":""},{"location":"Scripts/cntools-changelog/#changed_10","title":"Changed","text":"
        • Add decimal param to token metadata creator and increase ticker max length to 9 chars according to spec changes.
        "},{"location":"Scripts/cntools-changelog/#901-2022-01-17","title":"[9.0.1] - 2022-01-17","text":""},{"location":"Scripts/cntools-changelog/#changed_11","title":"Changed","text":"
        • Removing tool credits in offline metadata registry due to \"out of protocol\".
        "},{"location":"Scripts/cntools-changelog/#900-2022-01-10","title":"[9.0.0] - 2022-01-10","text":""},{"location":"Scripts/cntools-changelog/#changed_12","title":"Changed","text":"
        • Due to changes in cardano-node 1.33.x for the utxo ledger lookup and the previously heavy pool-params query, the Koios API is now the default option for these lookups.
        • You can update the KOIOS_API env variable to connect to a local instance of Koios (open source, incentivising all to build a decentralised query layer) if you'd rather not connect to the remote instance.
        • Visit https://www.koios.rest/ for more information about Koios or check out the API documentation at https://api.koios.rest.
        • If you'd like to revert to the old behaviour (use the CLI, which could be slow to retrieve UTxOs), you can set the ENABLE_KOIOS environment variable to N.
        "},{"location":"Scripts/cntools-changelog/#882-2021-12-28","title":"[8.8.2] - 2021-12-28","text":""},{"location":"Scripts/cntools-changelog/#fixed_18","title":"Fixed","text":"
        • Transform txBody using canonical order before signing/witnessing in case of HW wallet.
        • Bump minimum HW wallet versions:
        • Ledger >= 3.0.0
        • Trezor >= 2.4.3
        • cardano-hw-cli >= 1.9.0
        "},{"location":"Scripts/cntools-changelog/#881-2021-12-18","title":"[8.8.1] - 2021-12-18","text":""},{"location":"Scripts/cntools-changelog/#fixed_19","title":"Fixed","text":"
        • Fallback to Mary era in build commands to keep ledger compatibility
        "},{"location":"Scripts/cntools-changelog/#880-2021-12-15","title":"[8.8.0] - 2021-12-15","text":""},{"location":"Scripts/cntools-changelog/#fixed_20","title":"Fixed","text":"
        • Asset handling after cardano-node 1.32.1 version bump. ascii -> hex change in cardano-cli.
        "},{"location":"Scripts/cntools-changelog/#873-2021-11-30","title":"[8.7.3] - 2021-11-30","text":""},{"location":"Scripts/cntools-changelog/#fixed_21","title":"Fixed","text":"
        • Remove stale cntools.config comments
        "},{"location":"Scripts/cntools-changelog/#872-2021-11-08","title":"[8.7.2] - 2021-11-08","text":""},{"location":"Scripts/cntools-changelog/#changed_13","title":"Changed","text":"
        • Remove check if pool reward wallet is a hw wallet, enforce that it's also a multi-owner of the pool
        "},{"location":"Scripts/cntools-changelog/#871-2021-11-04","title":"[8.7.1] - 2021-11-04","text":""},{"location":"Scripts/cntools-changelog/#fixed_22","title":"Fixed","text":"
        • Balance check of wrong wallet in certain circumstances for pool modify
        "},{"location":"Scripts/cntools-changelog/#870-2021-10-05","title":"[8.7.0] - 2021-10-05","text":""},{"location":"Scripts/cntools-changelog/#changed_14","title":"Changed","text":"
        • CNTools configuration moved from cntools.config to cntools.sh
        "},{"location":"Scripts/cntools-changelog/#866-2021-09-26","title":"[8.6.6] - 2021-09-26","text":""},{"location":"Scripts/cntools-changelog/#fixed_23","title":"Fixed","text":"
        • Pool rotation date calculation fix, 8.6.4 didn't properly fix it
        "},{"location":"Scripts/cntools-changelog/#865-2021-09-15","title":"[8.6.5] - 2021-09-15","text":""},{"location":"Scripts/cntools-changelog/#fixed_24","title":"Fixed","text":"
        • Minimum utxo output calculation post Alonzo
        "},{"location":"Scripts/cntools-changelog/#864-2021-09-14","title":"[8.6.4] - 2021-09-14","text":""},{"location":"Scripts/cntools-changelog/#fixed_25","title":"Fixed","text":"
        • Pool rotation date calculation fix (display only)
        "},{"location":"Scripts/cntools-changelog/#863-2021-08-31","title":"[8.6.3] - 2021-08-31","text":""},{"location":"Scripts/cntools-changelog/#fixed_26","title":"Fixed","text":"
        • Pool retire fix
        "},{"location":"Scripts/cntools-changelog/#862-2021-08-30","title":"[8.6.2] - 2021-08-30","text":""},{"location":"Scripts/cntools-changelog/#fixed_27","title":"Fixed","text":"
        • Revert --whole-utxo flag, as it returns all address and will not accept --address
        "},{"location":"Scripts/cntools-changelog/#861-2021-08-27","title":"[8.6.1] - 2021-08-27","text":""},{"location":"Scripts/cntools-changelog/#changed_15","title":"Changed","text":"
        • Alonzo related changes for era and minimum utxo.
        "},{"location":"Scripts/cntools-changelog/#860-2021-08-27","title":"[8.6.0] - 2021-08-27","text":""},{"location":"Scripts/cntools-changelog/#changed_16","title":"Changed","text":"
        • Add --whole-utxo flag when querying UTxO, as required by cardano-cli 1.28, to keep behaviour the same as before.
        • Baseline compatibility with 1.29
        "},{"location":"Scripts/cntools-changelog/#8415-2021-07-15","title":"[8.4.15] - 2021-07-15","text":""},{"location":"Scripts/cntools-changelog/#changed_17","title":"Changed","text":"
        • Switch default to 'No' adding a message when sending funds
        "},{"location":"Scripts/cntools-changelog/#8414-2021-07-14","title":"[8.4.14] - 2021-07-14","text":""},{"location":"Scripts/cntools-changelog/#fixed_28","title":"Fixed","text":"
        • Fix for upcoming unreleased dbsync rest endpoint
        "},{"location":"Scripts/cntools-changelog/#8413-2021-07-08","title":"[8.4.13] - 2021-07-08","text":""},{"location":"Scripts/cntools-changelog/#changed_18","title":"Changed","text":"
        • Documentation references updated to new site layout
        "},{"location":"Scripts/cntools-changelog/#8412-2021-06-28","title":"[8.4.12] - 2021-06-28","text":""},{"location":"Scripts/cntools-changelog/#fixed_29","title":"Fixed","text":"
        • Pre-source env in offline/online mode for checkUpdate depending on argument provided to cntools.sh
        "},{"location":"Scripts/cntools-changelog/#8411-2021-06-25","title":"[8.4.11] - 2021-06-25","text":""},{"location":"Scripts/cntools-changelog/#changed_19","title":"Changed","text":"
        • KES calculation moved from CNTools & gLiveView into a common function in the env file. For online mode, node metrics are used for KES expiration instead of the static pool KES start period.
        • General message metadata support added to 'funds >> send' according to CIP-0020.
        "},{"location":"Scripts/cntools-changelog/#8410-2021-06-15","title":"[8.4.10] - 2021-06-15","text":""},{"location":"Scripts/cntools-changelog/#fixed_30","title":"Fixed","text":"
        • Fix display issue for CLI that were upgraded to Alonzo-Blue networks
        "},{"location":"Scripts/cntools-changelog/#849-2021-06-15","title":"[8.4.9] - 2021-06-15","text":""},{"location":"Scripts/cntools-changelog/#changed_20","title":"Changed","text":"
        • Handle various updates to grest queries [disabled] to make them independent of instances. Note: version incremented thrice on the PR branch itself
        "},{"location":"Scripts/cntools-changelog/#846-2021-06-04","title":"[8.4.6] - 2021-06-04","text":""},{"location":"Scripts/cntools-changelog/#fixed_31","title":"Fixed","text":"
        • Add balance check for main pool owner, that there is at least one utxo available
        • Allow utxo without lovelace (for future when we might have tokens on utxo without Ada, like on Alonzo TestNet)
        • pctToFraction helper function didn't properly handle 0 value
        "},{"location":"Scripts/cntools-changelog/#845-2021-05-31","title":"[8.4.5] - 2021-05-31","text":""},{"location":"Scripts/cntools-changelog/#fixed_32","title":"Fixed","text":"
        • Reset IFS at main loop, fixes invalid tip difference on home screen after going to Block > Summary
        "},{"location":"Scripts/cntools-changelog/#844-2021-05-19","title":"[8.4.4] - 2021-05-19","text":""},{"location":"Scripts/cntools-changelog/#fixed_33","title":"Fixed","text":"
        • Typo in Ledger app version requirement error and make it clearer that it's the app version, not the fw version.
        "},{"location":"Scripts/cntools-changelog/#843-2021-05-17","title":"[8.4.3] - 2021-05-17","text":""},{"location":"Scripts/cntools-changelog/#fixed_34","title":"Fixed","text":"
        • Token Mint/Burn script file signing not completely removed in all places (1.27.0 change)
        "},{"location":"Scripts/cntools-changelog/#842-2021-05-16","title":"[8.4.2] - 2021-05-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_35","title":"Fixed","text":"
        • cardano-hw-cli version limited to 1.2.0 for current Trezor fw v2.3.6. Please manually downgrade version, available at https://github.com/vacuumlabs/cardano-hw-cli/releases , placing files in $HOME/bin/cardano-hw-cli
        "},{"location":"Scripts/cntools-changelog/#841-2021-05-16","title":"[8.4.1] - 2021-05-16","text":""},{"location":"Scripts/cntools-changelog/#changed_21","title":"Changed","text":"
        • Wallet >> Show no longer requires payment.vkey to be present, as long as either payment or base .addr file(s) exist
        "},{"location":"Scripts/cntools-changelog/#840-2021-05-16","title":"[8.4.0] - 2021-05-16","text":""},{"location":"Scripts/cntools-changelog/#added_5","title":"Added","text":"
        • Compatibility with cardano-address 3.4.0 (while retaining support for 2.1.0)
        "},{"location":"Scripts/cntools-changelog/#830-2021-05-15","title":"[8.3.0] - 2021-05-15","text":""},{"location":"Scripts/cntools-changelog/#added_6","title":"Added","text":"
        • New env variable called PGREST_API and if set and reachable, used instead of local node queries and for advanced modes
        • New library function isPoolRegistered() for verifying if a pool is registered or not using either simple reg cert file detection (if REST API not set/reachable) or proper dbsync lookup using REST API. Used by Pool >> Show|List|Register|Modify
        • Option to mint/burn assets in hybrid mode
        • Transaction >> Sign now automatically tries to find the correct signing keys instead of having the user manually select the correct files
        • ** ADVANCED FEATURE ** - Chain Queries
        • Menu is dynamically built based on queries(JSON files) in DBSYNC_QUERY_FOLDER (env variable, default files/dbsync/queries) three levels deep.
        • A download option lets the user download the latest uploaded queries on Guild Operators GitHub site.
        • Query files
          • Contains menu path, description, variables, and queries(multiple)
          • Executes a predefined DBSync RPC/function through PostgREST API
          • Variables used in RPC call can either be user input, CNTools variables like EKG metrics, or an item in the result from a previous query(in the same query file)
          • Result from RPC call can either be printed or silent(only to be used for later query)
          • Output is printed as JSON
        "},{"location":"Scripts/cntools-changelog/#changed_22","title":"Changed","text":"
        • Minimum node version bumped to 1.27.0
        • Menu has been re-designed with both back & home options. Instead of returning to the home menu after a completed operation, the user is returned to the last menu.
        • Pool >> Show now uses the PostgREST API (if set), or the new pool-params CLI query as a fallback method.
        "},{"location":"Scripts/cntools-changelog/#fixed_36","title":"Fixed","text":"
        • 1.27.0 introduced a few changes in CLI commands for assets minting/burning
        "},{"location":"Scripts/cntools-changelog/#822-2021-05-02","title":"[8.2.2] - 2021-05-02","text":""},{"location":"Scripts/cntools-changelog/#fixed_37","title":"Fixed","text":"
        • KES expiration date fix
        "},{"location":"Scripts/cntools-changelog/#821-2021-04-26","title":"[8.2.1] - 2021-04-26","text":""},{"location":"Scripts/cntools-changelog/#changed_23","title":"Changed","text":"
        • Make use of UPDATE_CHECK environment variable to skip any checks to internet by default
        "},{"location":"Scripts/cntools-changelog/#820-2021-04-18","title":"[8.2.0] - 2021-04-18","text":""},{"location":"Scripts/cntools-changelog/#added_7","title":"Added","text":"
        • Ability to create & update a Cardano Token Registry submission JSON file
        • Requires 'token-metadata-creator' tool, instructions to download/build this tool added to Guild Operators documentation:
        • https://cardano-community.github.io/guild-operators/Build/offchainMetadataTools
        • Token Registry lookup in Wallet >> Show
        • Token asset fingerprint generation according to https://github.com/cardano-foundation/CIPs/pull/64
        "},{"location":"Scripts/cntools-changelog/#changed_24","title":"Changed","text":"
        • Redesigned input handling to be more flexible and improve output
        "},{"location":"Scripts/cntools-changelog/#816-2021-04-14","title":"[8.1.6] - 2021-04-14","text":""},{"location":"Scripts/cntools-changelog/#changed_25","title":"Changed","text":"
        • Metadata creation now offers the choice to add a metadata JSON scaffold to see the required structure
        "},{"location":"Scripts/cntools-changelog/#fixed_38","title":"Fixed","text":"
        • Fixed metadata creation entering JSON metadata through text editor
        "},{"location":"Scripts/cntools-changelog/#815-2021-04-09","title":"[8.1.5] - 2021-04-09","text":""},{"location":"Scripts/cntools-changelog/#fixed_39","title":"Fixed","text":"
        • Offline mode fix to ignore error when sourcing env
        "},{"location":"Scripts/cntools-changelog/#814-2021-04-05","title":"[8.1.4] - 2021-04-05","text":""},{"location":"Scripts/cntools-changelog/#changed_26","title":"Changed","text":"
        • Enhanced minimum utxo calculation (credits to Martin providing this)
        • based on calculations from https://github.com/input-output-hk/cardano-ledger-specs/blob/master/doc/explanations/min-utxo.rst
        • Validation of wallet address balance on transactions improved
        "},{"location":"Scripts/cntools-changelog/#813-2021-04-01","title":"[8.1.3] - 2021-04-01","text":""},{"location":"Scripts/cntools-changelog/#fixed_40","title":"Fixed","text":"
        • Alignment fix in blocks table
        "},{"location":"Scripts/cntools-changelog/#812-2021-03-31","title":"[8.1.2] - 2021-03-31","text":""},{"location":"Scripts/cntools-changelog/#changed_27","title":"Changed","text":"
        • Manual CNTools update replaced with automatic by asking to update on startup like the rest of the scripts in the guild repository.
        • Changelog truncated up to v6.0.0. Minor and patch version changelog entries merged with next major release changelog.
        "},{"location":"Scripts/cntools-changelog/#811-2021-03-30","title":"[8.1.1] - 2021-03-30","text":""},{"location":"Scripts/cntools-changelog/#fixed_41","title":"Fixed","text":"
        • Relay registration condition
        • Version number
        "},{"location":"Scripts/cntools-changelog/#810-2021-03-26","title":"[8.1.0] - 2021-03-26","text":""},{"location":"Scripts/cntools-changelog/#added_8","title":"Added","text":"
        • IPv6 support in pool registration/modification
        "},{"location":"Scripts/cntools-changelog/#changed_28","title":"Changed","text":"
        • Wallet delegation now lets you specify Pool ID in addition to local CNTools pool instead of previous cold.vkey cbor string
        • A couple of functions regarding number validation moved to common env file
        • Code adapted for changes in ledger-state dump used by 'Pool >> Show'
        "},{"location":"Scripts/cntools-changelog/#fixed_42","title":"Fixed","text":"
        • Backup & restore now exclude gpg encrypted keys from online backup and suppression of false alarms
        "},{"location":"Scripts/cntools-changelog/#802-2021-03-15","title":"[8.0.2] - 2021-03-15","text":""},{"location":"Scripts/cntools-changelog/#fixed_43","title":"Fixed","text":"
        • Bump cardano-hw-cli minimum version to 1.2.0
        • Add Ledger Cardano app version check with minimum enforced version of 2.2.0
        • Add Trezor firmware check with minimum enforced version of 2.3.6
        "},{"location":"Scripts/cntools-changelog/#801-2021-03-05","title":"[8.0.1] - 2021-03-05","text":""},{"location":"Scripts/cntools-changelog/#fixed_44","title":"Fixed","text":"
        • Add BASH version check, version 4.4 or newer required
        "},{"location":"Scripts/cntools-changelog/#800-2021-02-28","title":"[8.0.0] - 2021-02-28","text":""},{"location":"Scripts/cntools-changelog/#added_9","title":"Added","text":"
        • Multi Asset Token compatibility added throughout all CNTools operations.
        • Sending Ada and custom tokens is done through the normal 'Funds >> Send' operation
        "},{"location":"Scripts/cntools-changelog/#changed_29","title":"Changed","text":"
        • Metadata moved to a new Advanced section used for devs/advanced operations not normally used by SPOs.
        • Accessed by enabling developer/advanced mode in cntools.config or by providing runtime flag '-a'
        • Redesign of backup and restore.
        • Deletion of private keys moved from backup to new section under Advanced
        • Backup now only contains the content of the priv folder (files & scripts folders dropped)
        • Restore operation now restores directly to the priv folder instead of a random user-selected folder to make restore easier and better. Before restore, a new full backup of the priv folder is made and stored encrypted in priv/archive
        "},{"location":"Scripts/cntools-changelog/#fixed_45","title":"Fixed","text":"
        • JQ limitation workaround for large numbers
        • Dialog compatibility improvement by preventing dialog launching a subshell on some systems causing dialog not to run
        "},{"location":"Scripts/cntools-changelog/#716-2021-02-10","title":"[7.1.6] - 2021-02-10","text":"
        • Update curl commands when a file isn't downloaded correctly (to give the correct return code)
        "},{"location":"Scripts/cntools-changelog/#715-2021-02-03","title":"[7.1.5] - 2021-02-03","text":""},{"location":"Scripts/cntools-changelog/#changed_30","title":"Changed","text":"
        • Guild Announcement/Support Telegram channel added to CNTools GUI
        "},{"location":"Scripts/cntools-changelog/#fixed_46","title":"Fixed","text":"
        • Fix for a special case using an incomplete wallet (missing stake keys)
        "},{"location":"Scripts/cntools-changelog/#714-2021-02-01","title":"[7.1.4] - 2021-02-01","text":""},{"location":"Scripts/cntools-changelog/#fixed_47","title":"Fixed","text":"
        • Typo in function name after harmonization between scripts
        "},{"location":"Scripts/cntools-changelog/#713-2021-01-30","title":"[7.1.3] - 2021-01-30","text":""},{"location":"Scripts/cntools-changelog/#fixed_48","title":"Fixed","text":"
        • Vacuumlabs cardano-hw-cli 1.1.3 support, now the minimum supported version
        • Improved error handling
        "},{"location":"Scripts/cntools-changelog/#711-2021-01-29","title":"[7.1.1] - 2021-01-29","text":""},{"location":"Scripts/cntools-changelog/#changed_31","title":"Changed","text":"
        • Minor change to future update notification for common env file
        "},{"location":"Scripts/cntools-changelog/#710-2021-01-29","title":"[7.1.0] - 2021-01-29","text":""},{"location":"Scripts/cntools-changelog/#changed_32","title":"Changed","text":"
        • Remove ChainDB metrics references to align with cardano-node 1.25.1
        • Moved some functions to env for reusability between tools
        "},{"location":"Scripts/cntools-changelog/#702-2021-01-17","title":"[7.0.2] - 2021-01-17","text":""},{"location":"Scripts/cntools-changelog/#changed_33","title":"Changed","text":"
        • Re-add the option in offline workflow to use wallet folder that only contains stake keys for multi-owner pools
        "},{"location":"Scripts/cntools-changelog/#fixed_49","title":"Fixed","text":"
        • Verification of signing key in offline mode for extended signing keys (mnemonics imported wallets)
        "},{"location":"Scripts/cntools-changelog/#701-2021-01-13","title":"[7.0.1] - 2021-01-13","text":""},{"location":"Scripts/cntools-changelog/#changed_34","title":"Changed","text":"
        • Add prompt before updating common env file, instead of auto-update
        "},{"location":"Scripts/cntools-changelog/#700-2021-01-11","title":"[7.0.0] - 2021-01-11","text":"

        Though mostly unchanged in the user interface, this is a major update with most of the code re-written/touched in the back-end. Only the most noticeable changes added to changelog.

        "},{"location":"Scripts/cntools-changelog/#added_10","title":"Added","text":"
        • HW Wallet support through Vacuumlabs cardano-hw-cli (Ledger Nano X/S & Trezor T)
        • Vacuumlabs cardano-hw-cli added as a build option to prereqs.sh, option '-w' incl Ledger udev rules. The software from Vacuumlabs and the Ledger app are still early in development and may contain limitations that require workarounds. Users are recommended to familiarise themselves with its usage using test wallets first.
        • Because of HW wallet support, transaction signing has been re-designed. For CLI and HW wallet pool reg, raw tx is first witnessed by all signing keys separately and then assembled and signed instead of signing directly with all signing keys. But for all other HW wallet transactions, signing is done directly without first witnessing.
        • Requires updated Cardano app in Ledger/Trezor set to be released in January 2021 to use in pool registration/modification.
        • Option added to disable Dialog for file/dir input in cntools.config
        "},{"location":"Scripts/cntools-changelog/#changed_35","title":"Changed","text":"
        • Logging completely re-designed from the ground up. Previous selective logging wasn't very useful. Almost all output is now sent both to the screen and to a timestamped log file. One log file is created per CNTools session. Old log files are archived in the logs/archive subfolder; the last 10 log files are kept and the rest are pruned on CNTools startup.
        • DEBUG : Verbose output, most output printed on screen is logged as debug messages except explicitly disabled, like menu printing.
        • INFO : Informational and the most important output.
        • ACTION : e.g cardano-cli executions etc
        • ERROR : error messages and stderr output
        • Verbosity setting in cntools.config removed.
        • Offline workflow now uses a single JSON transaction file holding all the data needed. This allows us to bake additional data into the JSON file in addition to the tx body, making it much clearer what the offline transaction does, e.g. signing key verification and transaction data such as fee, source wallet and pool name. It also lets the user on the offline computer know which signing keys are needed to sign the transaction.
        • Sign Tx moved to Transaction >> Sign
        • Submit Tx moved to Transaction >> Submit
        "},{"location":"Scripts/cntools-changelog/#fixed_50","title":"Fixed","text":"
        • Remove intermediate prompt for showing changelog, so that it's directly visible.
        "},{"location":"Scripts/cntools-changelog/#631-2020-12-14","title":"[6.3.1] - 2020-12-14","text":""},{"location":"Scripts/cntools-changelog/#fixed_51","title":"Fixed","text":"
        • Array expansion not correctly handled for multi-owner signing keys
        • KES rotation output fix in OFFLINE mode, op.cert should be copied, not cold.counter
        • Output and file explorer workflow redesigned a bit for a better flow
        • formatLovelace() thousand separator fix after forcing locale to C.UTF-8 in env
        • formatAda() function added to pretty print pledge and cost w/o Lovelace
        "},{"location":"Scripts/cntools-changelog/#630-2020-12-03","title":"[6.3.0] - 2020-12-03","text":""},{"location":"Scripts/cntools-changelog/#changed_36","title":"Changed","text":"
        • printTable function replaced with bash printf due to compatibility issues
        • Improved workflow in pool registration/modification for relays and multi-owner.
        • Standardized names for wallet and pool files/folders moved to env file from cntools.config
        • Compatibility with 1.24.2 node (accommodate ledger schema and CLI changes), use 1.24.2 as baseline
        • Move version check to env
        "},{"location":"Scripts/cntools-changelog/#fixed_52","title":"Fixed","text":"
        • Error output for prerequisite checks
        "},{"location":"Scripts/cntools-changelog/#621-2020-11-28","title":"[6.2.1] - 2020-11-28","text":""},{"location":"Scripts/cntools-changelog/#changed_37","title":"Changed","text":"
        • Compatibility changes for cardano-node 1.23.0, now minimum version to run CNTools 6.2.1
        • Cleanup of old code
        "},{"location":"Scripts/cntools-changelog/#620-alpha-branch","title":"[6.2.0] - (alpha branch)","text":""},{"location":"Scripts/cntools-changelog/#added_11","title":"Added","text":"
        • Ability to post metadata on-chain, e.g. (but not limited to) Adam's https://vote.crypto2099.io/
        "},{"location":"Scripts/cntools-changelog/#changed_38","title":"Changed","text":"
        • Blocks view updated to adapt to the added CNCLI integration and changes made to the block collector (logMonitor)
        • CNCLI
        • Log Monitor
        • chattr file locking is now optional, with a new setting in cntools.config added for it.
        "},{"location":"Scripts/cntools-changelog/#fixed_53","title":"Fixed","text":"
        • unnecessary bech32 conversion in wallet import (non-breaking)
        "},{"location":"Scripts/cntools-changelog/#610-2020-10-22","title":"[6.1.0] - 2020-10-22","text":""},{"location":"Scripts/cntools-changelog/#added_12","title":"Added","text":"
        • Wallet de-registration with key deposit refund (new cntools.config parameter, WALLET_STAKE_DEREG_FILENAME)
        • Default values loaded for all config variables if omitted/missing in cntools.config
        "},{"location":"Scripts/cntools-changelog/#changed_39","title":"Changed","text":"
        • Prometheus node metrics replaced with EKG
        • Allow and handle missing pool.config in pool >> modify and show
        • Cancel and return added in several helper functions if cardano-cli execution fails
        • Various tweaks to the output
        "},{"location":"Scripts/cntools-changelog/#fixed_54","title":"Fixed","text":"
        • Script execution permissions after internal update
        • Handle redirect in curl metadata fetch
        "},{"location":"Scripts/cntools-changelog/#603-2020-10-16","title":"[6.0.3] - 2020-10-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_55","title":"Fixed","text":"
        • Shelley epoch transition calculation used the wrong byron metric in the calculation
        "},{"location":"Scripts/cntools-changelog/#602-2020-10-16","title":"[6.0.2] - 2020-10-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_56","title":"Fixed","text":"
        • Internal update had the wrong path to env file breaking automatic update, please use prereqs.sh to update
        • Fix in 6.0.1 broke pool id retrieval, now compatible with both pre and post cardano-node 1.21.2 syntax.
        "},{"location":"Scripts/cntools-changelog/#601-2020-10-16","title":"[6.0.1] - 2020-10-16","text":""},{"location":"Scripts/cntools-changelog/#fixed_57","title":"Fixed","text":"
        • As per change to cardano-cli syntax, pool ID requires --cold-verification-key-file instead of --verification-key-file
        "},{"location":"Scripts/cntools-changelog/#600-2020-10-15","title":"[6.0.0] - 2020-10-15","text":"

        This is a major release with a lot of changes. It is highly recommended that you familiarise yourself with the usage for Hybrid or Online v/s Offline mode on a testnet environment before doing it on production. Please visit https://cardano-community.github.io/guild-operators/upgrade for details.

        "},{"location":"Scripts/cntools-changelog/#added_13","title":"Added","text":"
        • Allow CNTools to operate in offline mode. Offline features include:
        • Simplified Wallet Show/List menu
        • Wallet delete without balance check option
        • Pool KES Rotation
        • Sign a staging transaction.
        • Allow creation of staging tx files using ttl as input in an online/offline-hybrid mode, that can be sent to offline server to sign.
        • To Transfer Funds
        • Withdraw Rewards
        • Delegate
        • Register/Modify/Retire pool
        • Allow import of a signed transaction to submit in online mode
        • Allow import of 15/24 word based wallets. Note that you'd need cardano-address and bech32 in your $PATH to use this feature (available if you rebuild cardano-node using updated cabal-build-all.sh), reusing guide from @ilap.
        • Backup now offers the ability to create an online backup without wallet payment/stake and pool cold sign keys
        • Regular (offline) backup now displays a warning if wallet payment/stake and pool cold sign keys are missing due to being deleted manually or by a previous backup
        • Retire notification in pool >> show
        • Sanity check before launching a second instance of cnode.sh
        • Doc update to run cnode.sh as a systemd service
        • Use secure remove (srm), when available, for deleting files.
        • Balance check notification added before wallet selection menus are shown, to indicate that work is being done in the background
        • Ability to select a different pool owner and reward wallet
        • Multi-owner support using stake vkey/skey files
        • Added TIMEOUT_LEDGER_STATE(default 300s) in cntools.config to be used instead of static 60 seconds for querying shelley ledger-state.
        • Option to delete private keys after successful backup
        • itnRewards.sh script to claim ITN rewards incl docs update
        • More explicit error messages at startup
        • Basic sanity checks for socket file
        • Backup & Restore of wallets, pools and configuration files
        • External KES rotation script using CNTools library
        • Add few flags to control prereqs to allow skipping overwriting files, deploying OS packages, etc
        • cntools.sh: Drop an error if log not found, indicating config with no JSON being used
        "},{"location":"Scripts/cntools-changelog/#changed_40","title":"Changed","text":"
        • Improved trap/exit handling
        • Allow thousand separator(,) in user input for sending ADA and pledge/cost at pool registration to make it easier to count the zeros
        • User input for files and directories now open a dialog gui to make it easier to find the correct path
        • CNTools now uses and works with cardano-node 1.19.0, please upgrade if you're not using this version.
        • Use manual calculation based on slot tip to get KES period
        • Removed ledger dump dependency from Pool Register, Modify, Retire and List.
        • The cost of the ledger dump is too high, replaced with a simple check whether the pool's registration certificate exists in the pool folder
        • Pool >> Show|Delegators are now the only options dumping the ledger-state
        • Wallet vkeys no longer encrypted as skeys are the only ones in need of protection
        • Update command change (change applied after this release is active):
        • Minor/Patch release: it will warn, backup and replace CNTools script files including cntools.config
        • Major release: No change, prompt user to backup and run prereqs.sh according to instructions.
        • Troubleshooting improvements:
        • Split 'config in json format' and 'hasPrometheus' checks
        • Output node sync stats if Shelley transition epoch is to be calculated
        • Protocol parameters output check to give an improved error message
        • Pool >> Show view updated to show modified pool values if Pool >> Modify has been used to update pool parameters
        • The section has also been updated to make it a little bit easier to read
        • Pool >> Delegators view also use updated pledge value if a pool modification has been registered to check if pledge is met
        • Use mainnet as default, since other testnets are either retired or not being maintained :(
        • Backup original files when doing upgrades, so that users do not lose their changes.
        • Major update description updated
        • env file update removed from minor update
        • Prometheus metrics used for various functions and now required to run CNTools, enabled by default
        • Changed references to ptn0 to generalize the usage
        • Change CNTools changelog heading format - +1 sublevels for headings (used by newer documentation)
        • Delegators previously displayed in Pool >> Show now moved to its own menu option. This is to de-clutter and because it takes time to parse this data from ledger-state
        • stake.cert no longer encrypted in wallet
        • Meta description now has a limit of 255 chars to match smash server limit
        • ledger-state timeout increased to 60s
        • Update ptn0 config to align with hydra config as much as possible, while keeping trace options on
        • moved update check to be one of the first things CNTools does after start to be able to show critical changes before anything else runs.
        • Parse node logs to check the transition from Byron to shelley era, and save the epoch for transition in db folder. This is required for calculating KES keys.
        • Please make sure to use config files created by the prereqs.sh, or enable JSON loggers for your config.
        • Update cnode.sh.templ to archive logs every time the node is restarted; this ensures that we're not searching through previous log history after the network was changed (a network change implies the db folder was deleted).
        • Update default network to MC3
        "},{"location":"Scripts/cntools-changelog/#removed","title":"Removed","text":"
        • Pool >> Delegators removed.
        • If/when a better option than dumping and parsing the ledger-state arises, re-adding it will be considered.
        • Utilize the community explorers listed at https://cardano-community.github.io/support-faq/explorers
        • POOL_PLEDGECERT_FILENAME removed from config, WALLET_DELEGCERT_FILENAME is used instead for the delegation cert to the pool, no need to keep a separate cert in the pool folder for this, it's the wallet that is delegated.
        • Redundant sections in guide
        • Stale delegate.counter
        "},{"location":"Scripts/cntools-changelog/#fixed_58","title":"Fixed","text":"
        • Check pool >> show stake distribution showing up as always 0.
        • KES expiration calculation
        • Slot interval calculation
        • Custom vname replacement(when using prereqs.sh -t) fix for internal update
        • Pool registration and de-registration certificates removed in case of retire/re-registration
        • KES Expiry to use KES Period instead of Epoch duration
        • Block Collector script adapted for cardano-node 1.19.0.
        • Block hash is now truncated in log, issue https://github.com/input-output-hk/cardano-node/issues/1738
        • High cpu usage reported in a few cases when running Block Collector
        • Depending on log level, parsing and base64-encoding each entry with jq could potentially put high load on weaker systems. Replaced with grep to only parse entries containing specific traces.
        • Docs for creating systemd block collector service file updated to include user env in run command
        • cardano-node 1.19.0 introduced an issue that required us to use KES as current - 1 while rotating.
        • A new getPoolID helper function added to extract both hex and bech32 pool ID
        • Added --output-format hex when extracting pool ID in hex format
        • A new pool.id-bech32 file gets created if cold.vkey is available and decrypted
        • Added error check to see if cardano-cli is in $PATH before continuing.
        • Backup & Restore paths were failing on machines due to alnum class availability on certain interpreters.
        • Rewards were not counted in stake and pledge
        • Removed +i file locking on .addr files when using Wallet >> Encrypt as these are re-generated from keys and need to be writable
        • Balance check added to Funds >> Withdraw for base address as this is used to pay the withdraw transaction fee
        • Resolve issue with Multi Owner causing an error with new pool registration (error was due to quotes)
        • Mainnet uses dedicated condition for slot checks
        • Timeout moved to a variable in cntools.library
        • KES Calculation for current KES period and KES expiration date. Please re-check the expiration date using Pool >> Show
        • calc_slots to be network independent
        • prom_host should be calculated from config file, instead of having to update a config
        • Minor typo in menu
        • Parse Config for virtual forks, which adds supports for MC4
        • CNTools block collector fix
        • column application added as a prereq, bsdmainutils/util-linux
        • cntoolsBlockCollector.sh get epoch using function
        • KES count was not showing up in Katip
        • Funds -> Delegation was broken as per recent changes in 1.17, corrected key type for delegation certificate
        • Pool >> Show delegator rewards parsing from ledger-state
        • Slot sync format improvement
        • kesExpiration function to use 17 fixed byron transition epochs
        "},{"location":"Scripts/cntools-changelog/#500-2020-07-20","title":"[5.0.0] - 2020-07-20","text":""},{"location":"Scripts/cntools-changelog/#added_14","title":"Added","text":"
        • HASH_IDENTIFIER where applicable to differentiate between network modes for commands used, required due to legacy Byron considerations
        • add ptn0-praos.json and ptn0-combinator.json to reduce confusion between formats, make prereqs default to combinator, and accept p argument to indicate praos mode.
        • cardano-node 1.16.0 refers to txhash using quotes, sed them out
        • show what's new at startup after update
        • file size check for pool metadata file
        • Add nonce in pool metadata JSON to keep registration attempts unique, avoiding one hash pointing to multiple URLs
        • Change default network to mainnet_candidate, and add second argument (g) to run prereqs against guild network
        • allow the use of pre-existing metadata from URL when registering or modifying pool
        • minimum pool cost check against protocol
        • Refresh option to home screen
        • Ability to register multiple relay DNS A records as well as a mix of DNS A and IPv4
        • Valid for pool registration and modification
        "},{"location":"Scripts/cntools-changelog/#changed_41","title":"Changed","text":"
        • Default config switched to combinator instead of testnet
        • Start maintaining separate versions of praos and combinator config files.
        • Add 10s timeout to wget commands in case of issue
        • timestamp added to pool metadata file to make every creation unique
        • Cancel shortcut changed from [c] to [Esc]
        • Default pool cost from 256 -> 400
        • slotinterval calculation to include decentralisation parameter
        • mainnet candidate compatible slot calculation, 17 fixed byron transition epochs (needs to be fixed for mainnet)
        • Pool metadata information to copy file as-is as well as wait for keypress to make sure file is copied before proceeding with registration.
        • Now use internal table builder to display previous relays
        • Instead of giving relays from previous registration as default values it will now ask if you want to re-register relays exactly as before to minimize steps and complexity
        "},{"location":"Scripts/cntools-changelog/#removed_1","title":"Removed","text":"
        • Delete cntools-updater script
        • NODE_SOCKET_PATH config parameter(replaced by CARDANO_NODE_SOCKET_PATH)
        "},{"location":"Scripts/cntools-changelog/#fixed_59","title":"Fixed","text":"
        • Slots reference was mixing up for shelley testnet in absence of a combinator network
        • numfmt dependency removed in favor of printf formatting
        • Vkey delegation fix due to json format switch
        • ADA not displayed in a couple of the wallet selection menus
        • KES calculation support for both MC and Shelley Testnet
        • Slot tip reference calculation for shelley testnet
        "},{"location":"Scripts/cntools-changelog/#400-2020-07-13","title":"[4.0.0] - 2020-07-13","text":""},{"location":"Scripts/cntools-changelog/#added_15","title":"Added","text":"
        • Add PROTOCOL_IDENTIFIER and NETWORK_IDENTIFIER instead of hardcoded entries for combinator v/s TPraos & testnet v/s magic differentiators respectively.
        • Keep both ptn0.yaml and ptn0-combinator.yaml to keep validity with mainnet-combinator
        "},{"location":"Scripts/cntools-changelog/#changed_42","title":"Changed","text":"
        • Revert back default for Public network to Shelley_Testnet as per https://t.me/CardanoStakePoolWorkgroup/282606
        "},{"location":"Scripts/cntools-changelog/#300-2020-07-12","title":"[3.0.0] - 2020-07-12","text":""},{"location":"Scripts/cntools-changelog/#added_16","title":"Added","text":"
        • Basic health check data on main menu
        • Epoch, time until next epoch, node tip vs calculated reference tip and a warning if node is lagging behind.
        • Address era and encoding to Wallet >> Show
        "},{"location":"Scripts/cntools-changelog/#changed_43","title":"Changed","text":"
        • Release 2.1.1 included a change to the env file and thus requires a major version bump.
        • Modified output on Update screen slightly.
        • KES calculation now takes into account the Byron era and the transition period until Shelley start
        • Credit to Martin @ ATADA for inspiration on how to calculate this
        "},{"location":"Scripts/cntools-changelog/#fixed_60","title":"Fixed","text":"
        • Version fix to include patch version
        "},{"location":"Scripts/cntools-changelog/#200-2020-07-12","title":"[2.0.0] - 2020-07-12","text":""},{"location":"Scripts/cntools-changelog/#added_17","title":"Added","text":"
        • Support for cardano-node 1.15.x
        • calculate-min-fee update to reflect change in 1.15. change was required to support byron witnesses.
        • gettip update as output is now json formatted
        • bech32 addressing in 1.15 required changes to delegator lookup in Pool >> Show
        • add --cardano-mode to query parameters
        • --mainnet flag for address generation
        • Output verbosity A new config parameter added for output verbosity using say function. 0 = Minimal - Show relevant information (default) 1 = Normal - More information about what's going on behind the scenes 2 = Maximal - Debug level for troubleshooting
        • Improve delegators list in Pool >> Show
        • Identify owners delegations
        • Display owner stake in red if (stake + reward) is below pledge (single-owner only for now)
        • Display all lovelace values in floating point ADA with 6 decimals (lovelaces) using locales
        • Block Collector summary view
        • KES rotation notification/warning on startup and in pool list/show views
        • Live stake and delegators in Pool >> Show
        • Changelog
        "},{"location":"Scripts/cntools-changelog/#changed_44","title":"Changed","text":"
        • op-cert creation moved from Pool >> New to Pool >> Register.
        • Output changed in various places throughout.
        • Include reward in delegators stake.
        • Release now include patch version in addition to major and minor version. In-app update modified to reflect this change.
        • Block Collector table view
        • Various minor code improvements
        "},{"location":"Scripts/cntools-changelog/#removed_2","title":"Removed","text":"
        • Enterprise wallet upgrade option in Wallet >> List
        • Not a registered wallet on chain information from Wallet listing
        • en_US.UTF-8 locale dependency
        "},{"location":"Scripts/cntools-changelog/#fixed_61","title":"Fixed","text":"
        • meta_json_url check
        • Invalid tx_in when registering stake wallet
        • Delegators rewards in Pool >> Show
        • Work-around awk versions that only support 32-bit integers
        • Sometimes the cardano-node log contains duplicate traces for the same slot at log file rollover, now filtered
        • Correct nwmagic - was hardcoded to 42
        • Set script locale to fix format issue
        "},{"location":"Scripts/cntools-changelog/#100-2020-07-07","title":"[1.0.0] - 2020-07-07","text":"
        • First official major release
        "},{"location":"Scripts/cntools-common/","title":"Common Tasks","text":"

        Important

        Familiarize yourself with the Online workflow of creating wallets and pools on the Preview/Preprod/Guild network first. You can then move on to test the Offline Workflow. The Offline workflow means that the private keys never touch the Online node. When comfortable with both the online and offline CNTools workflow, it's time to deploy what you learnt on the mainnet.

        This chapter describes some common use-cases for wallet and pool creation when running CNTools in Online mode. CNTools contains much more functionality not described here.

        Create Wallet

        A wallet is needed for pledge and to pay the pool registration fee.

        1. Choose [w] Wallet and you will be presented with the following menu:
          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Wallet Management\n\n ) New         - create a new wallet\n ) Import      - import a Daedalus/Yoroi 24/25 mnemonic or Ledger/Trezor HW wallet\n ) Register    - register a wallet on chain\n ) De-Register - De-Register (retire) a registered wallet\n ) List        - list all available wallets in a compact view\n ) Show        - show detailed view of a specific wallet\n ) Remove      - remove a wallet\n ) Decrypt     - remove write protection and decrypt wallet\n ) Encrypt     - encrypt wallet keys and make all files immutable\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Wallet Operation\n\n  [n] New\n  [i] Import\n  [r] Register\n  [z] De-Register\n  [l] List\n  [s] Show\n  [x] Remove\n  [d] Decrypt\n  [e] Encrypt\n  [h] Home\n
        2. Choose [n] New to create a new wallet. [i] Import can also be used to import a Daedalus/Yoroi based 15 or 24 word wallet seed
        3. Give the wallet a name
        4. CNTools will give you the wallet address. For example:
          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> NEW\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nName of new wallet: Test\n\nNew Wallet         : Test\nAddress            : addr_test1qpq5qjr774cyc6kxcwp060k4t4hwp42q43v35lmcg3gcycu5uwdwld5yr8m8fgn7su955zf5qahtrgljqfjfa4nr8jfsj4alxk\nEnterprise Address : addr_test1vpq5qjr774cyc6kxcwp060k4t4hwp42q43v35lmcg3gcyccuxhdka\n\nYou can now send and receive Ada using the above addresses.\nNote that Enterprise Address will not take part in staking.\nWallet will be automatically registered on chain if you\nchoose to delegate or pledge wallet when registering a stake pool.\n
        5. Send some money to this wallet. Either through the faucet or have a friend send you some.
        • The wallet must have funds in it before you can proceed (one way to verify the balance from the command line is sketched after this list).
        • The wallet created here is not derived from mnemonics; please see the Import section below if you'd like to use a wallet that can also be accessed from Daedalus/Yoroi
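
        If you want to double-check the balance outside CNTools, a minimal sketch using cardano-cli is shown below. It assumes a running node with CARDANO_NODE_SOCKET_PATH exported, the standardized base.addr file name from env, and an illustrative wallet name of Test; use the appropriate --testnet-magic value instead of --mainnet on test networks.

        # sketch only: wallet folder name and network flag are illustrative\ncardano-cli query utxo --address $(cat ${CNODE_HOME}/priv/wallet/Test/base.addr) --mainnet\n
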
        Import Daedalus/Yoroi/HW Wallet

        The Import feature of CNTools is originally based on this guide from Ilap.

        If you would like to use the Import function to import a Daedalus/Yoroi based 15 or 24 word wallet seed, please ensure that the cardano-address and bech32 binaries are available in your $PATH environment variable:

        bech32 --version\n1.1.0\n\ncardano-address --version\n3.5.0\n

        If the version is not as per above, please run the latest guild-deploy.sh from here and rebuild cardano-node as instructed here.
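
        A quick sketch to confirm both binaries resolve from your $PATH before attempting the import (standard shell built-ins only):

        for b in cardano-address bech32; do command -v $b >/dev/null || echo $b not found in PATH; done\n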

        To import a Daedalus/Yoroi wallet to CNTools, open CNTools and select the [w] Wallet option, then select [i] Import; the following menu will appear:

        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> IMPORT\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Wallet Import\n\n ) Mnemonic  - Daedalus/Yoroi 24 or 25 word mnemonic\n ) HW Wallet - Ledger/Trezor hardware wallet\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Wallet operation\n\n  [m] Mnemonic\n  [w] HW Wallet\n  [h] Home\n

        Note

        You can import a hardware wallet using [w] HW Wallet above, but please note that before you are able to use a hardware wallet in CNTools, you need to ensure you can detect your hardware device at the OS level using cardano-hw-cli
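
        As a sketch, one way to check that the device is visible at the OS level is to query it with cardano-hw-cli; the exact subcommand may vary between cardano-hw-cli versions, so treat this as an assumption and verify against cardano-hw-cli --help:

        cardano-hw-cli device version\n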

        Select the type of wallet you want to import; for Daedalus/Yoroi wallets select [m] Mnemonic:

        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> WALLET >> IMPORT >> MNEMONIC\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nName of imported wallet: TEST\n\n24 or 15 word mnemonic(space separated):\n
        Give your wallet a name (in this case 'TEST'), and enter your mnemonic phrase. Please ensure that you READ through the complete notes presented by CNTools before proceeding.

        Create Pool

        Create the necessary pool keys.

        1. From the main menu select [p] Pool
          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Pool Management\n\n ) New      - create a new pool\n ) Register - register created pool on chain using a stake wallet (pledge wallet)\n ) Modify   - change pool parameters and register updated pool values on chain\n ) Retire   - de-register stake pool from chain in specified epoch\n ) List     - a compact list view of available local pools\n ) Show     - detailed view of specified pool\n ) Rotate   - rotate pool KES keys\n ) Decrypt  - remove write protection and decrypt pool\n ) Encrypt  - encrypt pool cold keys and make all files immutable\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Select Pool Operation\n\n  [n] New\n  [r] Register\n  [m] Modify\n  [x] Retire\n  [l] List\n  [s] Show\n  [o] Rotate\n  [d] Decrypt\n  [e] Encrypt\n  [h] Home\n
        2. Select [n] New to create a new pool
        3. Give the pool a name. In our case, we call it TEST. The result should look something like this:
          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> NEW\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nPool Name: TEST\n\nPool: TEST\nID (hex)    : 8d5a3510f18ce241115da38a1b2419ed82d308599c16e98caea1b4c0\nID (bech32) : pool134dr2y833n3yzy2a5w9pkfqeakpdxzzenstwnr9w5x6vqtnclue\n
        Register Pool

        Register the pool on-chain.

        1. From the main menu select [p] Pool
        2. Select [r] Register
        3. Select the pool you just created
        4. CNTools will give you prompts to set pledge, margin, cost, metadata, and relays. Enter values that are useful to you.

        Make sure you set your pledge low enough to ensure the funds in your wallet will cover pledge plus pool registration fees.

        1. Select wallet to use as pledge wallet, Test in our case. As this is a newly created wallet, you will be prompted to continue with wallet registration. When complete and if successful, both wallet and pool will be registered on-chain.

        It will look something like this:

        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> REGISTER\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOnline mode  -  The default mode to use if all keys are available\n\nHybrid mode  -  1) Go through the steps to build a transaction file\n                2) Copy the built tx file to an offline node\n                3) Sign it using 'Sign Tx' with keys on offline node\n                   (CNTools started in offline mode '-o' without node connection)\n                4) Copy the signed tx file back to the online node and submit using 'Submit Tx'\n\nSelected value: [o] Online\n\n# Select pool\nSelected pool: TEST\n\n# Pool Parameters\npress enter to use default value\n\nPledge (in Ada, default: 50,000):\nMargin (in %, default: 7.5):\nCost (in Ada, minimum: 340, default: 340):\n\n# Pool Metadata\n\nEnter Pool's JSON URL to host metadata file - URL length should be less than 64 chars (default: https://foo.bat/poolmeta.json):\n\nEnter Pool's Name (default: TEST):\nEnter Pool's Ticker , should be between 3-5 characters (default: TEST):\nEnter Pool's Description (default: No Description):\nEnter Pool's Homepage (default: https://foo.com):\n\nOptionally set an extended metadata URL?\nSelected value: [n] No\n{\n  \"name\": \"TEST\",\n  \"ticker\": \"TEST\",\n  \"description\": \"No Description\",\n  \"homepage\": \"https://foo.com\",\n  \"nonce\": \"1613146429\"\n}\n\nPlease host file /opt/cardano/guild/priv/pool/TEST/poolmeta.json as-is at https://foo.bat/poolmeta.json\n\n# Pool Relay Registration\nSelected value: [d] A or AAAA DNS record (single)\nEnter relays's DNS record, only A or AAAA DNS records: relay.foo.com\nEnter relays's port: 6000\nAdd more relay entries?\nSelected value: [n] No\n\n# Select main owner/pledge wallet (normal CLI wallet)\nSelected wallet: Test (100,000.000000 Ada)\nWallet Test3 not registered on chain\n\nWaiting for new block to be created (timeout = 600 slots, 600s)\nINFO: press any key to cancel and return (won't stop transaction)\n\nOwner #1 : Test added!\n\nRegister a multi-owner pool (you need to have stake.vkey of any additional owner in a seperate wallet folder under $CNODE_HOME/priv/wallet)?\nSelected value: [n] No\n\nUse a separate rewards wallet from main owner?\nSelected value: [n] No\n\nWaiting for new block to be created (timeout = 600 slots, 600s)\nINFO: press any key to cancel and return (won't stop transaction)\n\nPool TEST successfully registered!\nOwner #1      : Test\nReward Wallet : Test\nPledge        : 50,000 Ada\nMargin        : 7.5 %\nCost          : 340 Ada\n\nUncomment and set value for POOL_NAME in ./env with 'TEST'\n\nINFO: Total balance in 1 owner/pledge wallet(s) are: 99,497.996518 Ada\n

        1. As mentioned in the above output: Uncomment and set value for POOL_NAME in ./env with 'TEST' (in our case, the POOL_NAME is TEST). The cnode.sh script will automatically detect whether the files required to run as a block producing node are present in the $CNODE_HOME/priv/pool/<POOL_NAME> directory.
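
        As a sketch, based on the standardized file names listed in the env section further below, the files cnode.sh typically looks for in that directory are the hot KES signing key, VRF signing key and operational certificate (the folder name TEST is illustrative):

        ls -l ${CNODE_HOME}/priv/pool/TEST/{hot.skey,vrf.skey,op.cert}\n
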
        Rotate KES Keys

        The node runs with an operational certificate, generated using the KES hot key. For security reasons, the protocol requires you to re-generate (or rotate) your KES key before it reaches expiry. On mainnet, this expiry is 62 KES periods of 36 hours each (roughly 93 days, hence the quarterly rotation), after which your node will not be able to forge valid blocks unless rotated. To be able to rotate KES keys, your cold key files (cold.skey, cold.vkey and cold.counter) need to be present on the machine where you run CNTools to rotate your KES key.
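
        CNTools performs the rotation for you, but as a rough sketch it boils down to generating a fresh KES key pair and issuing a new operational certificate with the cold keys. File names follow the standardized env names, and <start_kes_period> is a placeholder that CNTools derives from the current slot:

        # run where the cold keys live; <start_kes_period> is a placeholder\ncardano-cli node key-gen-KES --verification-key-file hot.vkey --signing-key-file hot.skey\ncardano-cli node issue-op-cert --kes-verification-key-file hot.vkey --cold-signing-key-file cold.skey --operational-certificate-issue-counter cold.counter --kes-period <start_kes_period> --out-file op.cert\n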

        1. To rotate KES keys and generate the operational certificate - op.cert:

        2. From the main menu select [p] Pool

        3. Select [o] Rotate
        4. Select the pool you just created

        The output should look like:

        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> POOL >> ROTATE KES\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSelect pool to rotate KES keys on\nSelected pool: TEST\n\nPool KES keys successfully updated\nNew KES start period  : 240\nKES keys will expire  : 302 - 2021-09-04 11:24:31 UTC\n\nRestart your pool node for changes to take effect\n\npress any key to return to home menu\n
        1. Start or restart your cardano-node. If deployed as a systemd service as shown here, you can run sudo systemctl restart cnode.
        2. Ensure the node is running as a block producing (core) node.

        You can use gLiveView - the output at the top should say > Cardano Node - (Core - Guild).

        Alternatively, you can check the node logs in $CNODE_HOME/logs/ to see whether the node is performing leadership checks (TraceStartLeadershipCheck, TraceNodeIsNotLeader, etc.)
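
        For example, a minimal sketch (the log file name node0.json is an assumption; adjust it to your logging setup):

        grep -E 'TraceStartLeadershipCheck|TraceNodeIsNotLeader|TraceNodeIsLeader' ${CNODE_HOME}/logs/node0.json | tail -5\n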

        "},{"location":"Scripts/cntools/","title":"Overview","text":"

        Important

        • Ensure the Pre-Requisites are in place before you proceed.
        • The active testers for this script use Fedora/CentOS/RHEL/Ubuntu operating systems; other OSes may require customisations.
        • The tool uses the folder structure defined here. Everyone is free to customise, but while doing so beware that you may introduce changes that may not be tested during updates.
        • Always use the Preview/Preprod/Guild network first to familiarise yourself, read the warnings/messages in full, and maintain your keys/backups with passwords (no one other than yourself can retrieve the funds if you have an accident), before performing actions on mainnet.

        Koios CNTools is like a swiss army knife for pool operators to simplify typical operations regarding their wallet keys and pool management. Please note that this tool only aims to simplify usual tasks for its users, but it should NOT act as an excuse to skip understanding how to manually work through things or basics of Linux operations. The skills highlighted on the home page are paramount for a stake pool operator, and so is the understanding of configuration files and network. Please ensure you've read and understood the disclaimers before proceeding.

        Visit the Changelog section to see progress and current release.

        "},{"location":"Scripts/cntools/#overview","title":"Overview","text":"

        The tool consists of the following files.

        • cntools.sh - the main script to launch cntools.
        • cntools.library - internal script with helper functions.

        In addition to the above files, there is also a dependency on the common env file. CNTools connects to your node through the configuration in the env file located in the same directory as the script. Customize env and cntools.sh files to your needs.

        Additionally, CNTools can integrate and enable optional functionalities based on external components:

        • cncli.sh is a companion script with optional functionalities to run on the core node (block producer) such as monitoring created blocks, calculating leader schedules and block validation.
        • logMonitor.sh is another companion script meant to be run together with the cncli.sh script to give a more complete picture.

        See CNCLI and Log Monitor sections for more details.

        Koios CNTools can operate in following modes:

        • Advanced - When CNTools is launched with -a runtime argument, this launches CNTools exposing a new Advanced menu, which allows users to manage (create/mint/burn) new assets.
        • Online - When all wallet and pool keys are available on the hot node, use this option. This is the default mode when you start CNTools without parameters.
        • Hybrid - When running in online mode, this option can be used in menus to create offline transaction files that can be passed to Offline CNTools to sign.
        • Offline - When CNTools is launched with -o runtime argument, this launches CNTools with limited set of features. This mode does not require access to cardano-node. It is mainly used to create Wallet/Pool and access Transaction >> Sign to sign an offline transaction file created in Hybrid mode.
        "},{"location":"Scripts/cntools/#download-and-update","title":"Download and Update","text":"

        The update functionality is provided from within CNTools. In case of breaking changes, please follow the prompts post-upgrade. If stuck, it's always best to re-run the latest guild-deploy.sh before proceeding.

        If you have not updated in a while, it is possible that you might come from a release with breaking changes. If so, please be sure to check out the upgrade instructions.

        "},{"location":"Scripts/cntools/#navigation","title":"Navigation","text":"

        The script's menu supports both arrow key navigation and shortcut key selection. The character within the square brackets is the shortcut to press for quick navigation. For other selections like the wallet and pool menus that don't contain shortcuts, there is a third way to navigate: the key pressed is compared to the first character of the menu option, and if there is a match the selection jumps to this location. A handy way to quickly navigate a large menu.

        "},{"location":"Scripts/cntools/#hardware-wallet","title":"Hardware Wallet","text":"

        CNTools includes hardware wallet support since version 7.0.0 through Vacuumlabs cardano-hw-cli application. Initialize and update firmware/app on the device to the latest version before usage following the manufacturer instructions.

        To enable hardware support run guild-deploy.sh -s w. This downloads and installs Vacuumlabs cardano-hw-cli including udev configuration. When a new version of Vacuumlabs cardano-hw-cli is released, run guild-deploy.sh -s w again to update. For additional runtime options, run guild-deploy.sh -h.

        Ledger
        • Supported devices: Nano S / Nano X
        • Make sure the latest cardano app is installed on the device.
        Trezor
        • Supported devices: Model T
        • Make sure the latest firmware is installed on the device. In addition to this, install Trezor Bridge for your system before trying to use your Trezor device in CNTools. You can find the latest version of the bridge at https://wallet.trezor.io/#/bridge
        "},{"location":"Scripts/cntools/#offline-workflow","title":"Offline Workflow","text":"

        CNTools can be run in online and offline mode. At a very high level, for working with offline devices, remember that you need to use CNTools in an online node to generate a staging transaction for the desired type of transaction, and then move the staging transaction to an offline node to sign (authorize) using the signing keys on your offline node - and then bring back the signed transaction to the online node for submission to the chain.

        For the offline workflow, all the wallet and pool keys should be kept on the offline node. The backup function in CNTools has an option to create a backup without private keys (sensitive signing keys); all other files are included in the backup to be transferred to the online node.

        Keys excluded from a backup created without private keys:
        • Wallet - payment.skey, stake.skey
        • Pool - cold.skey

        Note that setting up an offline server requires good SysOps background (you need to be aware of how to set up your server with offline mirror repository, how to transfer files across and be fairly familiar with the disk layout presented in the documentation). The guild-deploy.sh in its current state is not expected to run on an offline machine. Essentially, you simply need the cardano-cli, bech32, cardano-address binaries in your $PATH, OS level dependency packages [jq, coreutils, pkgconfig, gcc-c++ and bc ], and perhaps a copy from your online cnode directory (to ensure you have the right genesis/config files on your offline server). We strongly recommend you to familiarise yourself with the workflow on the preview / preprod / guild networks first, before attempting on mainnet.
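
        A minimal sketch to sanity-check an offline box against the requirements above (it only checks that the names resolve in $PATH; versions and genesis/config files still need to be verified manually):

        for b in cardano-cli bech32 cardano-address jq bc; do command -v $b >/dev/null && echo $b OK || echo $b MISSING; done\n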

        Example workflow for creating a wallet and pool:

        sequenceDiagram Note over Offline: Create/Import a wallet Note over Offline: Create a new pool Note over Offline: Rotate KES keys to generate op.cert Note over Offline: Create a backup w/o private keys Offline->>Online: Transfer backup to online node Note over Online: Fund the wallet base address with enough Ada Note over Online: Register wallet using ' Wallet \u00bb Register ' in hybrid mode Online->>Offline: Transfer built tx file back to offline node Note over Offline: Use ' Transaction >> Sign ' with payment.skey from wallet to sign transaction Offline->>Online: Transfer signed tx back to online node Note over Online: Use ' Transaction >> Submit ' to send signed transaction to blockchain Note over Online: Register pool in hybrid mode loop Offline-->Online: Repeat steps to sign and submit built pool registration transaction end Note over Online: Verify that pool was successfully registered with ' Pool \u00bb Show '

        Online mode

        To start CNTools in Online (advanced) Mode, execute the script from the $CNODE_HOME/scripts/ directory:

        cd $CNODE_HOME/scripts\n./cntools.sh -a\n

        You should get a screen that looks something like this:

        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> Koios CNTools vX.X.X - Guild - CONNECTED <<\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Main Menu    Telegram Announcement / Support channel: t.me/CardanoKoios/9759\n\n ) Wallet      - create, show, remove and protect wallets\n ) Funds       - send, withdraw and delegate\n ) Pool        - pool creation and management\n ) Transaction - Sign and Submit a cold transaction (hybrid/offline mode)\n ) Blocks      - show core node leader schedule & block production statistics\n ) Backup      - backup & restore of wallet/pool/config\n ) Advanced    - Developer and advanced features: metadata, multi-assets, ...\n ) Refresh     - reload home screen content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n                                                  Epoch 276 - 3d 19:08:27 until next\n What would you like to do?                                         Node Sync: 12 :)\n\n  [w] Wallet\n  [f] Funds\n  [p] Pool\n  [t] Transaction\n  [b] Blocks\n  [u] Update\n  [z] Backup & Restore\n  [a] Advanced\n  [r] Refresh\n  [q] Quit\n
        Offline mode

        To start CNTools in Offline Mode, execute the script from the $CNODE_HOME/scripts/ directory using the -o flag:

        cd $CNODE_HOME/scripts\n./cntools.sh -o\n

        The main menu header should let you know that CNTools is started in offline mode:

        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n >> Koios CNTools vX.X.X - Guild - OFFLINE <<\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Main Menu    Telegram Announcement / Support channel: t.me/CardanoKoios/9759\n\n ) Wallet      - create, show, remove and protect wallets\n ) Funds       - send, withdraw and delegate\n ) Pool        - pool creation and management\n ) Transaction - Sign and Submit a cold transaction (hybrid/offline mode)\n\n ) Backup      - backup & restore of wallet/pool/config\n\n ) Refresh     - reload home screen content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n                                                  Epoch 276 - 3d 19:03:46 until next\n What would you like to do?\n\n  [w] Wallet\n  [f] Funds\n  [p] Pool\n  [t] Transaction\n  [z] Backup & Restore\n  [r] Refresh\n  [q] Quit\n

        "},{"location":"Scripts/env/","title":"Common env","text":"

        A common environment file called env is sourced by most scripts in the Guild Operators repository. This file holds common variables and functions needed by other scripts. There are several benefits to this: duplicate settings do not have to be specified, and functions can be reused, decreasing the risk of misconfiguration and inconsistency.
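
        As an illustration of that reuse, a custom helper script could source env to pick up the same variables the bundled scripts use. A minimal sketch, assuming guild-deploy.sh has exported CNODE_HOME and that env tolerates being sourced from a custom script the same way it does from the bundled ones:

        #!/usr/bin/env bash\n# minimal sketch: reuse common variables from env in a custom script\n. ${CNODE_HOME}/scripts/env\necho Node port: ${CNODE_PORT}\necho Config: ${CONFIG}\n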

        "},{"location":"Scripts/env/#installation","title":"Installation","text":"

        The env file is downloaded together with the rest of the scripts when the Pre-Requisites are followed, and is located in the $CNODE_HOME/scripts/ directory. The file is also automatically downloaded/updated by some of the individual scripts if missing, like cntools.sh, gLiveView.sh and topologyUpdater.sh. All custom changes in the User Variables section are untouched on updates unless a forced overwrite is selected when running guild-deploy.sh.

        "},{"location":"Scripts/env/#configuration","title":"Configuration","text":"

        Most variables can be left commented to use the automatically detected or default value. But there are some that need to be set as explained below.

        • CNODE_PORT - This is the most important variable and needs to be set. Used when launching the node through cnode.sh and to identify the correct process of the node.
        • CNODE_HOME - The root directory of the Cardano node holding all the files needed. Can be left commented if guild-deploy.sh has been run as this variable is then exported and added as a system environment variable.
        • POOL_NAME - If the node is to be started as a block producer by cnode.sh this variable needs to be uncommented and set. This is the name given to the pool in CNTools (not ticker), i.e. the pool directory name under $CNODE_HOME/priv/pool/<POOL_NAME>

        Take your time and look through the different variables and their explanations and decide if you need/want to change the default setting. For a default deployment using guild-deploy.sh, the CNODE_PORT (all installs) and POOL_NAME (only block producer) should be the only variables needed to be set.
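
        As an illustration, for a default deployment of a block producer the edits to env typically boil down to something like the following (values are examples only); the full User Variables section is reproduced below:

        CNODE_PORT=6000\nPOOL_NAME=\"TEST\"\n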

        ######################################\n# User Variables - Change as desired #\n# Leave as is if unsure              #\n######################################\n\n#CCLI=\"${HOME}/.local/bin/cardano-cli\"                  # Override automatic detection of path to cardano-cli executable\n#CNCLI=\"${HOME}/.local/bin/cncli\"                       # Override automatic detection of path to cncli executable (https://github.com/AndrewWestberg/cncli)\n#CNODE_HOME=\"/opt/cardano/cnode\"                        # Override default CNODE_HOME path (defaults to /opt/cardano/cnode)\nCNODE_PORT=6000                                         # Set node port\n#CONFIG=\"${CNODE_HOME}/files/config.json\"               # Override automatic detection of node config path\n#SOCKET=\"${CNODE_HOME}/sockets/node0.socket\"            # Override automatic detection of path to socket\n#TOPOLOGY=\"${CNODE_HOME}/files/topology.json\"           # Override default topology.json path\n#LOG_DIR=\"${CNODE_HOME}/logs\"                           # Folder where your logs will be sent to (must pre-exist)\n#DB_DIR=\"${CNODE_HOME}/db\"                              # Folder to store the cardano-node blockchain db\n#UPDATE_CHECK=\"Y\"                                       # Check for updates to scripts, it will still be prompted before proceeding (Y|N).\n#TMP_DIR=\"/tmp/cnode\"                                   # Folder to hold temporary files in the various scripts, each script might create additional subfolders\n#EKG_HOST=127.0.0.1                                     # Set node EKG host IP\n#EKG_PORT=12788                                         # Override automatic detection of node EKG port\n#PROM_HOST=127.0.0.1                                    # Set node Prometheus host IP\n#PROM_PORT=12798                                        # Override automatic detection of node Prometheus port\n#EKG_TIMEOUT=3                                          # Maximum time in seconds that you allow EKG request to take before aborting (node metrics)\n#CURL_TIMEOUT=10                                        # Maximum time in seconds that you allow curl file download to take before aborting (GitHub update process)\n#BLOCKLOG_DIR=\"${CNODE_HOME}/guild-db/blocklog\"         # Override default directory used to store block data for core node\n#BLOCKLOG_TZ=\"UTC\"                                      # TimeZone to use when displaying blocklog - https://en.wikipedia.org/wiki/List_of_tz_database_time_zones\n#SHELLEY_TRANS_EPOCH=208                                # Override automatic detection of shelley epoch start, e.g 208 for mainnet\n#TG_BOT_TOKEN=\"\"                                        # Uncomment and set to enable telegramSend function. To create your own BOT-token and Chat-Id follow guide at:\n#TG_CHAT_ID=\"\"                                          # https://cardano-community.github.io/guild-operators/Scripts/sendalerts\n#USE_EKG=\"N\"                                            # Use EKG metrics from the node instead of Promethus. 
Promethus metrics(default) should yield slightly better performance\n#TIMEOUT_LEDGER_STATE=300                               # Timeout in seconds for querying and dumping ledger-state\n#IP_VERSION=4                                           # The IP version to use for push and fetch, valid options: 4 | 6 | mix (Default: 4)\n\n#WALLET_FOLDER=\"${CNODE_HOME}/priv/wallet\"              # Root folder for Wallets\n#POOL_FOLDER=\"${CNODE_HOME}/priv/pool\"                  # Root folder for Pools\n# Each wallet and pool has a friendly name and subfolder containing all related keys, certificates, ...\n#POOL_NAME=\"\"                                           # Set the pool's name to run node as a core node (the name, NOT the ticker, ie folder name)\n\n#WALLET_PAY_VK_FILENAME=\"payment.vkey\"                  # Standardized names for all wallet related files\n#WALLET_PAY_SK_FILENAME=\"payment.skey\"\n#WALLET_HW_PAY_SK_FILENAME=\"payment.hwsfile\"\n#WALLET_PAY_ADDR_FILENAME=\"payment.addr\"\n#WALLET_BASE_ADDR_FILENAME=\"base.addr\"\n#WALLET_STAKE_VK_FILENAME=\"stake.vkey\"\n#WALLET_STAKE_SK_FILENAME=\"stake.skey\"\n#WALLET_HW_STAKE_SK_FILENAME=\"stake.hwsfile\"\n#WALLET_STAKE_ADDR_FILENAME=\"reward.addr\"\n#WALLET_STAKE_CERT_FILENAME=\"stake.cert\"\n#WALLET_STAKE_DEREG_FILENAME=\"stake.dereg\"\n#WALLET_DELEGCERT_FILENAME=\"delegation.cert\"\n\n#POOL_ID_FILENAME=\"pool.id\"                             # Standardized names for all pool related files\n#POOL_HOTKEY_VK_FILENAME=\"hot.vkey\"\n#POOL_HOTKEY_SK_FILENAME=\"hot.skey\"\n#POOL_COLDKEY_VK_FILENAME=\"cold.vkey\"\n#POOL_COLDKEY_SK_FILENAME=\"cold.skey\"\n#POOL_OPCERT_COUNTER_FILENAME=\"cold.counter\"\n#POOL_OPCERT_FILENAME=\"op.cert\"\n#POOL_VRF_VK_FILENAME=\"vrf.vkey\"\n#POOL_VRF_SK_FILENAME=\"vrf.skey\"\n#POOL_CONFIG_FILENAME=\"pool.config\"\n#POOL_REGCERT_FILENAME=\"pool.cert\"\n#POOL_CURRENT_KES_START=\"kes.start\"\n#POOL_DEREGCERT_FILENAME=\"pool.dereg\"\n\n#ASSET_FOLDER=\"${CNODE_HOME}/priv/asset\"                # Root folder for Multi-Assets containing minted assets and subfolders for Policy IDs\n#ASSET_POLICY_VK_FILENAME=\"policy.vkey\"                 # Standardized names for all multi-asset related files\n#ASSET_POLICY_SK_FILENAME=\"policy.skey\"\n#ASSET_POLICY_SCRIPT_FILENAME=\"policy.script\"           # File extension '.script' mandatory\n#ASSET_POLICY_ID_FILENAME=\"policy.id\"\n
        "},{"location":"Scripts/gliveview/","title":"gLiveView","text":"

        Reminder !!

        Ensure the Pre-Requisites are in place before you proceed.

        Koios gLiveView is a local monitoring tool to use in addition to remote monitoring tools like Prometheus/Grafana, Zabbix or IOG's RTView. This is especially useful when moving to a systemd deployment - if you haven't done so already - as it offers an intuitive UI to monitor the node status.

        "},{"location":"Scripts/gliveview/#configuration-startup","title":"Configuration & Startup","text":"

        For most setups, it's enough to set CNODE_PORT in the env file. The rest of the variables should automatically be detected. If required, modify User Variables in env and gLiveView.sh to suit your environment (if the environment is customised). This should lead you to a stage where you can now start running ./gLiveView.sh in the folder you downloaded the script (the default location would be $CNODE_HOME/scripts). Note that the script is smart enough to automatically detect when you're running as a Core or Relay and will show fields accordingly.

        The tool can be run in legacy mode with only standard ASCII characters for terminals with trouble displaying the box-drawing characters. Run ./gLiveView.sh -h to show available command-line parameters or permanently set it directly in script.
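
        As a quick sketch, assuming the default scripts directory:

        cd ${CNODE_HOME}/scripts\n./gLiveView.sh      # auto-detects Core vs Relay and shows fields accordingly\n./gLiveView.sh -h   # list available command-line parameters\n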

        A sample output from both core and relay together with peer analysis:

        Core

        Relay

        Peer Analysis

        "},{"location":"Scripts/gliveview/#upper-main-section","title":"Upper main section","text":"

        Displays live metrics from cardano-node gathered through the node's EKG/Prometheus (env setting) endpoint.

        • Epoch Progress - Epoch number and progress is live from the node while date calculation until epoch boundary is based on offline genesis parameters.
        • Block - The node's current block height since genesis start.
        • Slot - The node's current slot height since current epoch start.
        • Density - With the current chain parameters (MainNet), a slot happens every 1 second (slotLength) and the chance of a slot producing a block is activeSlotsCoeff = 0.05, so a block is created roughly every slotLength/activeSlotsCoeff = 20 seconds and the maximum chain density is 5%. Normally, the value should fluctuate around this value.
        • Total Tx - The total number of transactions processed since node start.
        • Pending Tx - The number of transactions and the bytes(total, in kb) currently in mempool to be included in upcoming blocks.
        • Tip (ref) - Reference tip is an offline calculation based on genesis values for current slot height since genesis start.
        • Tip (diff) / Status - Will either show node status as starting|sync xx.x% or, if close to the reference tip, the tip difference Tip (ref) - Tip (node) to see how far off the tip (diff value) the node is. With current parameters a slot diff up to 40 from the reference tip is considered good, but it should usually stay below 30. It's perfectly normal to see big differences in slots between blocks. It's the built-in randomness at play. To see if a node is really healthy and staying on tip you would need to compare the tip between multiple nodes.
        • Forks - The number of forks since node start. Each fork means the blockchain evolved in a different direction, thereby discarding blocks. A high number of forks means there is a higher chance of orphaned blocks.
        • Peers In / Out - Shows how many connections the node has established in and out. See Peer analysis section for how to get more details of incoming and outgoing connections.
        • P2P Mode
        • Cold peers indicate the number of inactive but known peers to the node.
        • Warm peers tell how many established connections the node has.
        • Hot peers how many established connections are actually active.
        • Bi-Dir(bidirectional) and Uni-Dir(unidirectional) indicate how the handshake protocol negotiated the connection. The connection between p2p nodes will always be bidirectional, but it will be unidirectional between p2p nodes and non-p2p nodes.
        • Duplex shows the connections that are actually used in both directions, only bidirectional connections have this potential.
        • Mem (RSS) - RSS is the Resident Set Size and shows how much memory is allocated to cardano-node and resident in RAM. It does not include memory that is swapped out. It does include memory from shared libraries, as long as the pages from those libraries are actually in memory, as well as all stack and heap memory.
        • Mem (Live) / (Heap) - GC (Garbage Collector) values that show how much memory is used for live/heap data. A large difference between them (or the heap approaching the physical memory limit) means the node is struggling with the garbage collector and/or may begin swapping.
        • GC Minor / Major - Collecting garbage from the \"Young space\" is called a Minor GC. A Major (Full) GC is done more rarely and is a more expensive operation. Explaining garbage collection is outside the scope of this documentation and google is your friend for this.
        • Block propagation - Last Block measures the duration between when the last block was scheduled to be produced and when the node learned about it. Late blocks are blocks whose delay is larger than 5s. If the node is not syncing, the number of late blocks needs to stay low. Within \u2153/5s estimates the chance of observing a delay of \u2153/5s (based on the delays observed for previous blocks). A healthy node needs to stay above 95% of blocks within 3s. Finally, served blocks counts how many blocks were fetched by \"in\" peers. If this does not increase for a long time, it means the \"in\" peers are learning about new blocks from somewhere else (and therefore this node is not contributing towards accelerating propagation). Overall, these metrics are helpful in tweaking the topology and/or the performance of the network links.
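        As a rough illustration of the offline calculations mentioned for Epoch Progress and Tip (ref) above, the snippet below derives a reference tip from assumed MainNet genesis values. The constants are illustrative assumptions (not values read from your node), and gLiveView/env perform the real calculation internally:

        SHELLEY_START=1596059091   # assumed unix time of the MainNet Shelley era start
        BYRON_SLOTS=4492800        # assumed slots elapsed before the Shelley era
        SLOT_LENGTH=1              # seconds per slot (slotLength)
        now=$(date +%s)
        tip_ref=$(( BYRON_SLOTS + (now - SHELLEY_START) / SLOT_LENGTH ))
        echo "Reference tip (slot): ${tip_ref}"
        echo "Max chain density   : 5% (activeSlotsCoeff)"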
        "},{"location":"Scripts/gliveview/#core-section","title":"Core section","text":"

        If the node is run as a core, identified by the 'forge-about-to-lead' parameter, a second core section is displayed.

        • KES period / expiration - This section contains the current and remaining KES periods, as well as a calculated date for the expiration. When getting close to the expiration date, the values change color.
        • Missed slot checks - A value that shows whether the node has missed slots for attempting leadership checks (as an absolute value and as a percentage since node startup). !!! info \"Missed Slot Leadership Check\"

          Note that while this counter should ideally be close to zero, you will often see a higher value if the node is busy (e.g. paused for garbage collection or busy with reward calculations). A consistently high percentage of missed slots needs further investigation (assistance for troubleshooting can be sought here), as in extremely remote cases it can overlap with a slot that your node could be a leader for. A quick way to inspect the raw counter outside gLiveView is sketched after this list.

        • Blocks - If CNCLI is activated to store blocks created in a blocklog DB, data from this blocklog is displayed. See linked CNCLI documentation for details regarding the different block metrics. If CNCLI is not deployed, block metrics displayed are taken from node metrics and show blocks created by the node since node start.
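        For a quick raw view of the missed slot counter outside gLiveView, you can query the node's Prometheus endpoint directly. The port (12798) and the metric name below are assumptions based on a default configuration and may differ in your setup:

        curl -s http://127.0.0.1:12798/metrics | grep -i slotsMissed
        # e.g. cardano_node_metrics_slotsMissedNum_int 0   (assumed metric name on a default config)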

        "},{"location":"Scripts/gliveview/#peer-analysis","title":"Peer analysis","text":"

        A manual peer analysis can be triggered by key press p. A latency test will be done on incoming and outgoing connections to the node.

        Note

        Note that with P2P enabled, an incoming/outgoing connection can be reused for bi-directional traffic. There isn't yet a way to distinctly identify a P2P peer's direction for a given IP.

        For outgoing connections (peers in the topology file), the ping type used is selected in this order: 1. cncli - If available, this gives the most accurate measure as it checks the entire handshake process against the remote peer. 2. ss - Sends a TCP SYN packet to ping the remote peer on the cardano-node port. Should give a ~100% success rate. 3. tcptraceroute - Same as ss. 4. ping - Fallback method using ICMP ping against the IP. Will only work if the remote peer's firewall accepts ICMP traffic.

        For incoming connections, only ICMP ping is used as the remote peer's port is unknown. It's not uncommon to see many undetermined peers for incoming connections, as it's good security practice to disable ICMP in the firewall.

        Once the analysis is finished, it will display the RTTs (round-trip times) for the peers and group them into the ranges 0-50 ms, 50-100 ms, 100-200 ms and above 200 ms. The analysis is NOT live. Press [h] Home to go back to the default view or [i] Info to show the in-script help text. The Up and Down arrow keys are used to select the incoming or outgoing detailed list of IPs and their RTT values. The Left (<) and Right (>) arrow keys can be used to navigate the pages in the selected list.
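        If you want to reproduce a rough latency check by hand for a single peer (for example to cross-check what the analysis reports), something like the following can be used. The IP/port are placeholders and tcptraceroute must be installed separately:

        PEER_IP=198.51.100.10   # placeholder - replace with a peer from your topology
        PEER_PORT=6000          # placeholder - the peer's cardano-node port
        tcptraceroute -n -q 1 -w 2 "$PEER_IP" "$PEER_PORT" | tail -1   # TCP-based check against the node port
        ping -c 3 -W 2 "$PEER_IP" | tail -1                            # ICMP fallback, only if the peer allows it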

        "},{"location":"Scripts/gliveview/#troubleshootingcustomisations","title":"Troubleshooting/Customisations","text":"

        In case you run into trouble while running the script, you might want to edit env & gLiveView.sh and look at the User Variables section. You can override the values if the automatic detection does not provide the right information, but we would appreciate it if you could also notify us by raising an issue against the GitHub repository:

        gLiveView.sh

        ######################################\n# User Variables - Change as desired #\n######################################\n\nNODE_NAME=\"Cardano Node\"                  # Change your node's name prefix here, keep at or below 19 characters!\nREFRESH_RATE=2                            # How often (in seconds) to refresh the view (additional time for processing and output may slow it down)\nLEGACY_MODE=false                         # (true|false) If enabled unicode box-drawing characters will be replaced by standard ASCII characters\nRETRIES=3                                 # How many attempts to connect to running Cardano node before erroring out and quitting\nPEER_LIST_CNT=6                           # Number of peers to show on each in/out page in peer analysis view\nTHEME=\"dark\"                              # dark  = suited for terminals with a dark background\n# light = suited for terminals with a bright background\nENABLE_IP_GEOLOCATION=\"Y\"                 # Enable IP geolocation on outgoing and incoming connections using ip-api.com\n

        "},{"location":"Scripts/itnrewards/","title":"Itnrewards","text":""},{"location":"Scripts/itnrewards/#concept","title":"Concept","text":"

        To claim rewards earned during the Incentivized TestNet, the private and public keys from ITN must be converted to Shelley stake keys. A script called itnRewards.sh has been created to guide you through the process of converting the keys and creating a CNTools compatible wallet from where the rewards can be withdrawn (a sketch of the underlying key conversion follows the steps below).

        graph TB A([\"itnRewards.sh\"]) A --x B([\"ITN Owner skey (ed25519[e]_sk)..\"]) --x D([\"cardano-cli shelley key convert-itn-key ..\"]) A --x C([\"ITN Owner vkey (ed25519_pk)..\"]) --x D D --x E([\"Stake skey/vkey\"]) --x L A --x F([\"cardano-cli shelley ..\"]) F --x G([\"Payment skey/vkey/addr\"]) --x L F --x H([\"Reward addr\"]) --x L F --x I([\"Base addr\"]) --x L L[CNTools Wallet] ;"},{"location":"Scripts/itnrewards/#steps","title":"Steps","text":"
        • If the secret key used for jcli account in ITN was ed25519_sk (not extended), you can run the itnRewards.sh script providing the name for the CNTools wallet and ITN owner public/secret keys that were used to register your pool as below.
          cd $CNODE_HOME/scripts\n./itnRewards.sh MyITNWallet ~/jormu/account/priv/owner.sk ~/jormu/account/priv/owner.pk\n
        • Start CNTools and verify that the correct balance is shown in the wallet reward address
        • Fund base address of the wallet with enough funds to pay the withdraw tx fee
        • Use FUNDS >> WITHDRAW to move rewards to the base address of wallet
        • You can now spend/move funds as you see fit
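        For reference, the key conversion that itnRewards.sh wraps looks roughly like the below. The file names are illustrative, and the exact cardano-cli sub-command and era syntax may differ between versions:

        # Illustrative only - itnRewards.sh performs this (plus the wallet creation) for you
        cardano-cli key convert-itn-key \
          --itn-signing-key-file ~/jormu/account/priv/owner.sk \
          --out-file stake.skey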
        "},{"location":"Scripts/itnwitness/","title":"Itnwitness","text":"

        Disclaimer

        Currently, this is to protect existing pools from the ITN that already have a delegator base against spoofing - to prevent scammers from building on the ITN results of known pools. There may be a solution for Mainnet nodes in the future too, but in its current form it doesn't apply to those.

        "},{"location":"Scripts/itnwitness/#concept","title":"Concept","text":"

        Due to the expected ticker spoofing attack for pools that were famous during ITN, some of the community members have proposed an interim solution to verify the legitimacy of a pool for delegators. You can check the high-level workflow below:

        graph TB A(\"ITN Owner skey (ed25519/ed25519e) ..\") --x C([\"jcli key sign ..\"]) B(\"Haskell Pool ID (pool.id) ..\") --x C C --x D(\"Signature key, (pool.sig) ..\") E(\"ITN Owner vkey (ed25519_pk) ..\") --x F(\"Extended Metadata JSON (poolmeta_extended.json) ..\") D --x F F --x G(\"Pool Meta JSON (poolmeta.json) ..\") ;"},{"location":"Scripts/itnwitness/#steps","title":"Steps","text":"

        The actual implementation is pretty straightforward; we will keep it brief, as we assume those participating are fairly familiar with jcli usage.

        • You need to use your owner keys that were used to register your pool, and it should match the owner public key you presented on the official cardano-foundation GitHub while registering metadata.
        • Store your pool ID in a file (eg: mainnet_pool.id)
        • Sign the file using your owner secret key from ITN (eg: owner_skey) as per below:
          jcli key sign --secret-key ~/jormu/account/priv/owner.sk $CNODE_HOME/priv/pool/TEST/pool.id --output mainnet_pool.sig\ncat mainnet_pool.sig\n# ed25519_sig1sn32v3z...d72rg7rc6gs\n
        • Add this signature and owner public key to the extended pool JSON , so that it looks like below:
          {\n\"itn\": {\n\"owner\": \"ed25519_pk1...\",\n\"witness\": \"ed25519_sig1...\"\n}\n}\n
        • Host this signature file online at a URL with raw contents easily accessible on the internet (eg: https://my.pool.com/extended-metadata.json)
        • When you register/modify a pool using CNTools, use the above mentioned URL to add to your pool metadata.

        If the process is approved to appear for wallets, we may consider providing easier alternatives. If you have any queries about the process, or any additions, please create a git issue/PR against the guild repository - to capture common queries and update instructions/help text where appropriate.

        "},{"location":"Scripts/itnwitness/#sample-output-of-json-files-generated","title":"Sample output of JSON files generated","text":"
        • Metadata JSON used for registering the pool (the one that will be hosted at the URL used to define the pool, eg: https://hosting.site/poolmeta.json)

          {\n\"name\":\"Test\",\n\"ticker\":\"TEST\",\n\"description\":\"For demo purposes only\",\n\"homepage\":\"https://hosting.site\",\n\"nonce\":\"1595816423\",\n\"extended\":\"https://hosting.site/poolmeta_extended.json\"\n}\n

        • Extended Metadata JSON used for hosting additional metadata (hosted at the URL referred to in the extended field above, eg: https://hosting.site/poolmeta_extended.json)

        {\n\"itn\": {\n\"owner\": \"ed25519_pk1...\",\n\"witness\": \"ed25519_sig1...\"\n}\n}\n
        "},{"location":"Scripts/logmonitor/","title":"Log Monitor","text":"

        Reminder !!

        Ensure the Pre-Requisites are in place before you proceed.

        logMonitor.sh is a general purpose JSON log monitoring script for traces created by cardano-node. Currently, it looks for traces related to leader slots and block creation but other uses could be added in the future.

        "},{"location":"Scripts/logmonitor/#block-traces","title":"Block traces","text":"

        For the core node (block producer) the logMonitor.sh script can be run to monitor the JSON log file created by cardano-node for traces related to leader slots and block creation.

        For optimal coverage, it's best run together with CNCLI scripts as they provide different functionalities. Together, they create a complete picture of blocks assigned, created, validated or invalidated due to node issues.

        "},{"location":"Scripts/logmonitor/#installation","title":"Installation","text":"

        The script is best run as a background process. This can be accomplished in many ways, but the preferred method is to run it as a systemd service. A terminal multiplexer like tmux or screen could also be used, but is not covered here.

        Use the deploy-as-systemd.sh script to create a systemd unit file (deployed together with CNCLI). Log output is handled by syslog and ends up in the system's standard syslog file, normally /var/log/syslog. journalctl -f -u cnode-logmonitor.service can be used to check service output (follow mode). Other logging configurations are not covered here.
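        A typical deployment and a quick status check could look like this sketch (the path assumes the default folder structure; adjust for your environment):

        cd /opt/cardano/cnode/scripts                # default $CNODE_HOME/scripts location
        ./deploy-as-systemd.sh                       # deploys logMonitor together with the CNCLI services
        sudo systemctl status cnode-logmonitor.service
        journalctl -f -u cnode-logmonitor.service    # follow the service output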

        "},{"location":"Scripts/logmonitor/#view-blocklog","title":"View Blocklog","text":"

        Best viewed in CNTools or gLiveView. See CNCLI for example output.

        "},{"location":"Scripts/sendalerts/","title":"Sendalerts","text":"

        !> Ensure the Pre-Requisites are in place before you proceed.

        This section describes the ways in which CNTools can send important messages to the operator.

        "},{"location":"Scripts/sendalerts/#telegram-alerts","title":"Telegram alerts","text":"

        If known but unwanted errors occur on your node, or if characteristic values indicate an unusual status, CNTools can send you Telegram alert messages.

        To do this, you first have to activate your own bot and link it to your own Telegram user. Here is an explanation of how this works:

        1. Open Telegram and search for \"botfather\".

        2. Send it your wish: /newbot.

        3. Define a name for your bot, such as cntools_[POOLNAME]_alerts.

        4. Botfather will confirm the creation of your bot by giving you the unique bot access token. Keep it safe and private.

        5. Now send at least one direct message to your new bot.

        6. Open this URL in your browser by using your own, just created bot access token:

        https://api.telegram.org/bot<your-access-token>/getUpdates\n
        7. The result is a JSON document. Look for the value of result.message.chat.id. This chat id should be a large integer number.

        This is all you need to enable your Telegram alerts in the scripts/env file - uncomment and add the chat ID to the TG_CHAT_ID user variable in the env file:

        ...\nTG_CHAT_ID=\"<YOUR_TG_CHAT_ID>\"\n...  \n
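        To verify the bot token and chat id before relying on CNTools alerts, you can send a test message manually; the values below are placeholders for the token and chat id obtained in the steps above:

        TG_BOT_TOKEN="<your-access-token>"   # placeholder - token from BotFather
        TG_CHAT_ID="<YOUR_TG_CHAT_ID>"       # placeholder - chat id from getUpdates
        curl -s "https://api.telegram.org/bot${TG_BOT_TOKEN}/sendMessage" \
          --data-urlencode chat_id="${TG_CHAT_ID}" \
          --data-urlencode text="CNTools alert test"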

        "},{"location":"Scripts/topologyupdater/","title":"Topology Updater","text":"

        Reminder !!

        • Since the network has to get along without the P2P network module for the time being, it needs static topology files. This \"TopologyUpdater\" service, which is far from being perfect due to its centralization factor, is intended to be a temporary solution to allow everyone to activate their relay nodes without having to postpone and wait for manual topology completion requests.
        • You should NOT set up topologyUpdater for your block producing nodes.

        The topologyUpdater shell script must be executed on the relay node as a cronjob exactly every 60 minutes. After 4 consecutive requests (3 hours), the node is considered a new relay node and is listed in the topology file. If the node is turned off, it's automatically delisted after 3 hours.

        "},{"location":"Scripts/topologyupdater/#download","title":"Download and Configure","text":"

        If you have run guild-deploy.sh, this should already be available in your scripts folder, making this step unnecessary.

        Before the updater can make a valid request to the central topology service, it must query the current tip/blockNo from the well-synced local node. It connects to your node through the configuration in the script as well as the common env configuration file. Customize these files for your needs.
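        A manual equivalent of that tip check is a simple cardano-cli query against the local node. This is only a sketch: it assumes cardano-cli is on PATH, and the socket path is a placeholder to be taken from the SOCKET value in your env file:

        export CARDANO_NODE_SOCKET_PATH=<path to your node socket>   # e.g. under ${CNODE_HOME}/sockets/, per env
        cardano-cli query tip --mainnet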

        To download topologyUpdater.sh manually, you can execute the commands below and test executing Topology Updater once (it's OK if first execution gives back an error):

        cd $CNODE_HOME/scripts\ncurl -s -o topologyUpdater.sh https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/topologyUpdater.sh\ncurl -s -o env https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/env\nchmod 750 topologyUpdater.sh\n./topologyUpdater.sh\n

        "},{"location":"Scripts/topologyupdater/#modify","title":"Examine and modify the variables within topologyUpdater.sh script","text":"

        Out of the box, the scripts might come with some assumptions that may or may not be valid for your environment. One of the common changes as an SPO would be to complete the CUSTOM_PEERS section as below to include your local relays/BP nodes (described in the How do I add my own nodes section), and any additional peers you'd like to always be available at minimum. Please do take time to update the variables in the User Variables section in env & topologyUpdater.sh:

        ### topologyUpdater.sh\n\n######################################\n# User Variables - Change as desired #\n######################################\n\nCNODE_HOSTNAME=\"CHANGE ME\"                                # (Optional) Must resolve to the IP you are requesting from\nCNODE_VALENCY=1                                           # (Optional) for multi-IP hostnames\nMAX_PEERS=15                                              # Maximum number of peers to return on successful fetch\n#CUSTOM_PEERS=\"None\"                                      # Additional custom peers to (IP,port[,valency]) to add to your target topology.json\n# eg: \"10.0.0.1,3001|10.0.0.2,3002|relays.mydomain.com,3003,3\"\n#BATCH_AUTO_UPDATE=N                                      # Set to Y to automatically update the script if a new version is available without user interaction\n

        Any customisations you add above will be saved across future guild-deploy.sh executions, unless you specify the -f flag to overwrite completely.

        "},{"location":"Scripts/topologyupdater/#deploy","title":"Deploy the script","text":"

        systemd service The script can be deployed as a background service in different ways, but the recommended and easiest way, if guild-deploy.sh was used, is to utilize the deploy-as-systemd.sh script to set up and schedule the execution. This will deploy both push & fetch service files, as well as timers for a scheduled 60 min node alive message and a cnode restart at the user-set interval (default: 24 hours), when running the deploy script.

        • cnode-tu-push.service : pushes a node alive message to Topology Updater API
        • cnode-tu-push.timer : schedules the push service to execute once every hour
        • cnode-tu-fetch.service : fetches a fresh topology file before the cnode.service file is started/restarted
        • cnode-tu-restart.service : handles the restart of cardano-node (cnode.sh)
        • cnode-tu-restart.timer : schedules the cardano-node restart service, default every 24h

        systemctl list-timers can be used to check the push and restart service schedules.
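        For example, to check the schedules and the latest push result when deployed via systemd (service names as listed above, paths assuming the default folder structure):

        systemctl list-timers 'cnode-tu-*'               # upcoming push/restart runs
        sudo journalctl -u cnode-tu-push.service -n 20   # last push output
        cat /opt/cardano/cnode/logs/topologyUpdater_lastresult.json   # last API response (default path)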

        crontab job Another way to deploy the topologyUpdater.sh script is as a crontab job. Add the script to be executed once per hour at a minute of your choice (e.g. at minute 25 in the example below). The example below will handle both the fetch and the push in a single call to the script once an hour. In addition to the below crontab job for topologyUpdater, it's expected that you also add a scheduled restart of the relay node to pick up a fresh topology file fetched by the topologyUpdater script, with relays that are alive and well.

        25 * * * * /opt/cardano/cnode/scripts/topologyUpdater.sh\n
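        If you go the crontab route, an additional scheduled restart entry could look like the below. The time, service name and sudo rights are assumptions to adapt to your own deployment:

        # restart the relay daily at 02:30 so the freshly fetched topology is picked up
        30 2 * * * sudo /usr/bin/systemctl restart cnode.service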
        "},{"location":"Scripts/topologyupdater/#logs","title":"Logs","text":"

        You can check the last result of push message in logs/topologyUpdater_lastresult.json. If deployed as systemd service, use sudo journalctl -u <service> to check output from service.

        If one of the parameters is outside the allowed ranges, invalid or missing, the returned JSON will tell you what needs to be fixed.

        Don't try to execute the script more often than once per hour. It's completely useless and may lead to a temporary blacklisting.

        "},{"location":"Scripts/topologyupdater/#why-does-my-topology-file-only-contain-iog-peers","title":"Why does my topology file only contain IOG peers?","text":"

        Each subscribed node (4 consecutive requests) is allowed to fetch a subset of other nodes to prove loyalty/stability of the relay. Until reaching this point, your fetch calls will only return IOG peers combined with any custom peers added in the USER VARIABLES section of the topologyUpdater.sh script.

        The engineers of the cardano-node network stack suggested using around 20 peers. More peers create unnecessary and unwanted system load and delays.

        In its default setting, topologyUpdater returns a list of 15 remote peers.

        Note that the change in topology is only effective upon restart of your node. Make sure you account for some scheduled restarts on your relays, to help onboard newer relays onto the network (as described in the systemd section).

        "},{"location":"Scripts/topologyupdater/#how-do-i-add-my-own-relaysstatic-nodes-in-addition-to-dynamic-list-generated-by-topologyupdater","title":"How do I add my own relays/static nodes in addition to dynamic list generated by topologyUpdater?","text":"

        Most Stake Pool Operators may have a few preferences (own relays, close friends, etc.) that they would like to add to their topology by default. This is where the CUSTOM_PEERS variable in topologyUpdater.sh comes in. You can add a list of peers in the format hostname/IP,port[,valency] and the output topology.json formed will already include the custom peers that you supplied. Every custom peer is defined in the form [address],[port] with an optional ,[valency] (if not specified, the valency defaults to 1). Multiple custom peers are separated by |. An example of a valid CUSTOM_PEERS variable would be:

        CUSTOM_PEERS=\"foo.bar.io,3001,2|198.175.21.197,6001|36.233.3.89,6000\"\n
        The list above would add three custom peers with the specified addresses and ports, with the first one additionally specifying the optional valency parameter (in this case 2).

        "},{"location":"Scripts/topologyupdater/#how-are-the-peers-for-my-topology-file-selected","title":"How are the peers for my topology file selected?","text":"

        We calculate the distance on the Earth's surface from your node's IP to all subscribed peers. We then order the peers by distance (closest first) and start by selecting one peer. We then skip some, pick the next, skip, pick, skip, pick ... until we reach the end of the list (furthest away). The number of skipped records is calculated in a way to have the desired number of peers at the end.

        Every requesting node has its personal distance to all other nodes.

        We assume this should result in a well-distributed and interconnected peering network.
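        Purely as an illustration of that skip-selection (not the service's actual implementation), given a hypothetical file with peers sorted closest-first, one could thin it down like this:

        TOTAL=$(wc -l < peers_by_distance.txt)   # hypothetical input: one peer per line, closest first
        WANT=15                                  # desired number of peers (matches the default MAX_PEERS)
        STEP=$(( (TOTAL + WANT - 1) / WANT ))    # skip interval so roughly WANT peers remain
        awk -v s="$STEP" '(NR - 1) % s == 0' peers_by_distance.txt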

        "},{"location":"docker/build/","title":"Build","text":""},{"location":"docker/build/#intro","title":"Intro","text":"

        \ud83d\udca1 Docker containers are the fastest way to run a Cardano node in both \"Relay\" and \"Block-Producing\" (Pool) mode.

        "},{"location":"docker/build/#how-to-build","title":"How to build","text":"
        docker build -t cardanocommunity/cardano-node:latest - < dockerfile_bin\n
        "},{"location":"docker/build/#for-windows-users","title":"For Windows Users","text":"

        With PowerShell on Windows, you can run Docker by typing the following command:

        Get-Content dockerfile_bin  | docker build -t guild-operators/cardano-node:latest -\n
        "},{"location":"docker/build/#see-also","title":"See also","text":"

        Docker Tips

        Docker Official Docs

        "},{"location":"docker/docker/","title":"Overview","text":"

        Running your own Cardano node has never been so fast and easy.

        But first, a kind reminder about the security aspects of running docker containers.

        "},{"location":"docker/docker/#external-resources","title":"External resources","text":"
        • DockerHub Guild's images
        • YouTube Guild's Videos
        "},{"location":"docker/docker/#built-in-cardano-software","title":"\ud83d\udd14 Built-in Cardano software","text":"
        • cardano-address
        • cardano-cli
        • cardano-hw-cli
        • cardano-node
        • cardano-submit-api
        • mithril-client
        • mithril-signer
        "},{"location":"docker/docker/#mithril","title":"Mithril","text":""},{"location":"docker/docker/#built-in-tools","title":"\ud83d\udd14 Built-in tools","text":"
        • CNTools
        • gLiveView
        • CNCLI
        • Ogmios
        • Cardano Hardware CLI
        • Cardano Signer
        • Monitoring ready (with EKG and Prometheus)
        "},{"location":"docker/docker/#docker-splash-screen","title":"Docker Splash screen","text":""},{"location":"docker/docker/#cntools","title":"Cntools","text":""},{"location":"docker/docker/#gliveview","title":"gLiveView","text":""},{"location":"docker/docker/#gliveview-peers-analyzer","title":"gLiveView Peers analyzer","text":""},{"location":"docker/docker/#cncli","title":"CNCLI","text":""},{"location":"docker/docker/#strategy","title":"Guild Operators Docker strategy ( mainnet/ preview / preprod / guild)","text":"

        Modular docker images based on Debian.

        Based on the Guild's work the Cardano Node image is built in a single stage: -> dockerfile_bin

        • Uses guild-deploy.sh to:
        • Install the os prerequisites
        • Add the cardano software from release binaries
        • Add the guild's SPO tools and the node's configuration files.
        "},{"location":"docker/docker/#additional-docs","title":"Additional docs","text":"

        If you prefer to build the images on your own, you can check:

        • Docker Build Documentation
        • Docker Tips
        "},{"location":"docker/docker/#port-mapping","title":"Port mapping","text":"

        The dockerfiles are located in ./files/docker/

        Node ports: Node (6000), Prometheus (12798), EKG (12781). Wallet ports: Wallet (8090), Prometheus (12798). Flavor: Debian."},{"location":"docker/run/","title":"Run","text":""},{"location":"docker/run/#os-requirements","title":"OS Requirements","text":"
        • docker-ce installed - Get Docker.
        Private mode Public mode

        Note

        1) --entrypoint=bash # This option won't start the node's container but only the OS (the node software won't actually start; you'll need to manually execute entrypoint.sh), ready to get in (through the command docker exec -it <container name or hash> /bin/bash) and play/explore around with it in command-line mode. 2) All guild tools env variables can be used to start a new container using custom values, by using the \"-e\" option. 3) CPU, RAM and shared memory allocation options for the container can be used when you start the container (i.e. --shm-size or --memory or --cpus, official docker resource docs). 4) --env MITHRIL_DOWNLOAD=Y # This option will allow the Mithril client to download the latest Mithril snapshot of the blockchain when the container starts and does not have a copy of the blockchain yet. This is useful when you want to start a new node from scratch and don't want to wait for the node to sync from the network. This option is only available for the mainnet, preprod, and preview networks.
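        Putting a few of those options together, an exploratory container start might look like the following sketch; the container name, network and resource limits are illustrative, not recommendations:

        docker run --init -dit \
          --name cnode-explore \
          --entrypoint=bash \
          -e NETWORK=preprod \
          -e MITHRIL_DOWNLOAD=Y \
          --cpus=2 --memory=12g --shm-size=1g \
          cardanocommunity/cardano-node
        docker exec -it cnode-explore /bin/bash   # enter the container; run entrypoint.sh manually to start the node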

        "},{"location":"docker/run/#use-cases","title":"Use Cases","text":"
        • Pool Management
        • Wallet Management
        • Node testing
        docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
        "},{"location":"docker/run/#use-cases_1","title":"Use Cases:","text":"
        • Node Relay
        docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-p 6000:6000\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
        • Node Relay with custom permanent cfg by passing the env variable CONFIG (Mapping your configuration folder as below will allow you to retain configurations if you update or delete your container)
        docker run --init -dit\n--name <YourCName>\n--security-opt=no-new-privileges\n-e NETWORK=mainnet\n-e CONFIG=/opt/cardano/cnode/priv/<your own configuration files>.yml\n-p 6000:6000\n-v <your_custom_path>:/opt/cardano/cnode/priv\n-v <your_custom_db_path>:/opt/cardano/cnode/db\ncardanocommunity/cardano-node\n
        "},{"location":"docker/security/","title":"Security","text":""},{"location":"docker/security/#docker-security-best-practices","title":"Docker Security best practices","text":""},{"location":"docker/security/#intro","title":"Intro","text":"

        On the security front, Docker developers are faced with different types of security attacks such as:

        • Kernel exploits: Since the host\u2019s kernel is shared in the container, a compromised container can attack the entire host.
        • Container breakouts: Caused when the user is able to escape the container namespace and interact with other processes on the host.
        • Denial-of-service attacks: Occur when some containers take up enough resources to hamper the functioning of other applications.
        • Poisoned images: Caused when an untrusted image is being run and a hacker is able to access application data and, potentially, the host itself.

        Docker containers are now being exploited to covertly mine for cryptocurrency, marking a shift from ransomware to cryptocurrency malware. As with all things in security, Docker security too is a moving target \u2014 so it\u2019s helpful to have access to up-to-date information, including experience-based best practices, for securing your containerized environments.

        "},{"location":"docker/security/#here-below-some-key-concepts","title":"Here below some key concepts:","text":"
        1. Use a Third-Party Security Tool Docker allows you to use containers from untrusted public repositories, which increases the need to scrutinize whether the container was created securely and whether it is free of any corrupt or malicious files. For this, use a multi-purpose security tool that gives extensive dev-to-production security controls (keep reading below).

        2. Manage Vulnerability It is best to have a sound vulnerability management program that has multiple checks throughout the container lifecycle. Vulnerability management should incorporate quality gates to detect access issues and weaknesses for a potential exploit from dev-to-production environments.

        3. Monitor and Audit Container Activity It is vital to monitor the container ecosystem and detect suspicious activity. Container monitoring activities provide real-time reports that can help you react promptly to a security breach.

        4. Enable Docker Content Trust Docker Content Trust is a new feature incorporated into Docker 1.8. It is disabled by default, but once enabled, it allows you to verify the integrity, authenticity, and publication date of all Docker images from the Docker Hub Registry.

        5. Use Docker Bench for Security You should consider Docker Bench for Security as your must-use script. Once the script is run, you will notice a lot of information regarding configuration best practices for deploying Docker containers that can be used to further secure your Docker server and containers.

        6. Resource Utilization To reduce performance impacts and denial-of-service attacks, it is a good practice to implement limits on the system resources that the containers can consume. If, for example, a web server is compromised, it helps to limit the impact to the other processes that are running on a host.

        7. RBAC RBAC is role-based access control. If you have multiple users accessing your environment, this is a must-have. It can be quite expensive to implement, but Portainer makes it super easy.

        "},{"location":"docker/security/#security-docker-best-practices","title":"Security Docker best practices:","text":""},{"location":"docker/security/#the-guild-docker-images-are-not-using-all-the-following-tips-due-to-functional-purpose","title":"The Guild Docker images are not using all the following tips due to functional purpose","text":"

        Guild tips:

        • NEVER NEVER NEVER expose Docker API publicly!!! (disabled by default)

        • Keep Docker Host Up-to-date

        • Reverse uptime: containers that are frequently shut down and replaced by new containers are more difficult for hackers to attack.
        • Use a Firewall or Expose only the ports you need to be public.
        • Use a *Reverse Proxy
        • Do not Change **Docker Socket Ownership
        • Do not Run Docker Containers as Root
        • Use Trusted Docker Images
        • Use Privileged Mode Carefully (This is usually done by adding --privileged; you can use --security-opt=no-new-privileges instead)

        Some more general tips:

        • Restrict container capabilities: \"--cap-drop ALL\"
        • Use Docker Secrets
        • Change DOCKER_OPTS to ***Respect IP Table Firewall
        • Control Docker Resource Usage
        • Rate Limit: is quite common to mitigate brute force or denial of service attacks.
        • Fail2ban: Fail2ban scans your log files and bans IP address that shows malicious intent
        • Container Vulnerability Scanner
        "},{"location":"docker/security/#notes","title":"Notes:","text":"
        • *Nginx is a very good choice as load balancer and/or reverse proxy.
        • **By default the socket is owned by root user and docker group.
        • *** On Ubuntu/Debian based systems, edit /etc/default/docker and add the following line: DOCKER_OPTS= \"--iptables=false\"
        "},{"location":"docker/tips/","title":"Tips","text":""},{"location":"docker/tips/#how-to-run-a-cardano-node-with-docker","title":"How to run a Cardano Node with Docker","text":"

        With this quick guide you will be able to run a cardano node in seconds and also have the powerful Koios SPO scripts built-in.

        "},{"location":"docker/tips/#how-to-operate-interactively-within-the-container","title":"How to operate interactively within the container","text":"

        Once you have executed the container as a daemon with an attached tty (i.e. using the -dit flags), you are then able to enter the container.

        To get a shell within the container console, use the following command (replace CN with your container name):

        docker exec -it CN bash 

        This command will bring you into the container's bash environment, ready to use the Koios tools.

        "},{"location":"docker/tips/#docker-flags-explained","title":"Docker flags explained","text":"
        \"docker build\" options explained:\n -t : option is to \"tag\" the image you can name the image as you prefer as long as you maintain the references between dockerfiles.\n\n\"docker run\" options explained:\n -d : for detach the container\n -i : interactive enabled -t : terminal session enabled\n -e : set an Env Variable\n -p : set exposed ports (by default if not specified the ports will be reachable only internally)\n--hostname : Container's hostname\n --name : Container's name\n
        "},{"location":"docker/tips/#custom-container-with-your-own-cfg","title":"Custom container with your own cfg","text":"
        docker run --init -itd  \n--name Relay                                  # Optional (recommended for quick access): set a name for your newly created container.\n-p 9000:6000                                  # Optional: to expose the internal container's port (6000) to the host <IP> port 9000\n-e NETWORK=mainnet                            # Mandatory: mainnet / preprod / guild-mainnet / guild\n--security-opt=no-new-privileges              # Option to prevent privilege escalations\n-v <YourNetPath>:/opt/cardano/cnode/sockets   # Optional: useful to share the node socket with other containers\n-v <YourCfgPath>:/opt/cardano/cnode/priv      # Optional: if used has to contain all the sensitive keys needed to run a node as core\n-v <YourDBbk>:/opt/cardano/cnode/db           # Optional: if not set a fresh DB will be downloaded from scratch\ncardanocommunity/cardano-node:latest          # Mandatory: image to run\n

        Note

        To be able to use the CNTools encryption key feature, you need to manually set ENABLE_CHATTR to \"true\" in \"cntools.config\" and not use the --security-opt=no-new-privileges docker run option.

        "},{"location":"docker/tips/#docker-cli-managment","title":"Docker CLI managment","text":""},{"location":"docker/tips/#official","title":"Official","text":"
        • docker inspect
        • docker ps
        • docker ls
        • docker stop
        "},{"location":"docker/tips/#un-official-docker-managment-cli-tool","title":"Un-Official Docker managment cli tool","text":"
        • Lazydocker
        "},{"location":"docker/tips/#docker-backups-and-restores","title":"Docker backups and restores","text":"

        The docker container has optional backup and restore functionality that can be used to back up the /opt/cardano/cnode/db directory. To have the backup persist longer than the container, the backup directory should be mounted as a volume.

        [!NOTE] The backup and restore functionality is disabled by default.

        [!WARNING] Make sure adequate space exists on the host as the backup will double the space consumed by the database.

        "},{"location":"docker/tips/#creating-a-backup","title":"Creating a Backup","text":"

        When the container is started with the ENABLE_BACKUP environment variable set to Y the container will automatically create a backup in the /opt/cardano/cnode/backup/$NETWORK-db directory. The backup will be created when the container is started and if the backup directory is smaller than the db directory.

        "},{"location":"docker/tips/#restoring-from-a-backup","title":"Restoring from a Backup","text":"

        When the container is started with the ENABLE_RESTORE environment variable set to Y the container will automatically restore the latest backup from the /opt/cardano/cnode/backup/$NETWORK-db directory. The database will be restored when the container is started and if the backup directory is larger than the db directory.
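        A sketch of starting the container with backups enabled and the backup directory persisted on the host follows; the mount paths are placeholders, and ENABLE_BACKUP/ENABLE_RESTORE behave as described above:

        docker run --init -dit \
          --name <YourCName> \
          --security-opt=no-new-privileges \
          -e NETWORK=mainnet \
          -e ENABLE_BACKUP=Y \
          -v <your_custom_db_path>:/opt/cardano/cnode/db \
          -v <your_custom_backup_path>:/opt/cardano/cnode/backup \
          cardanocommunity/cardano-node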

        "}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index 215e289da..7df6a8b54 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,172 +2,172 @@ https://cardano-community.github.io/guild-operators/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/basics/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/build/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/contributors/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/grest-meets/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/sidebar/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/upgrade/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Appendix/RecoverByronWallet/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Appendix/monitoring/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Appendix/postgres/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Build/dbsync/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Build/graphql/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Build/grest-changelog/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Build/grest/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Build/node-cli/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Build/offchain-metadata-tools/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Build/wallet/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/blockperf/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/cncli/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/cntools-changelog/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/cntools-common/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/cntools/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/env/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/gliveview/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/itnrewards/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/itnwitness/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/logmonitor/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/sendalerts/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/Scripts/topologyupdater/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/docker/build/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/docker/docker/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/docker/run/ - 2023-11-03 + 2023-11-27 daily https://cardano-community.github.io/guild-operators/docker/security/ - 2023-11-03 + 2023-11-27 daily 
https://cardano-community.github.io/guild-operators/docker/tips/ - 2023-11-03 + 2023-11-27 daily \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 08edff8e4183dc13a690369e65f12d5551f1f0d2..745fc2dd677c393895aa254fd3c00c3584a5c6a0 100644 GIT binary patch literal 491 zcmVM3AO;c4g)B_kVR0`M@yn8AD-UFxH&_ifazH6nr*d#O^0|{B^j9y;>JEf+zG)yBIXG9z^h14I`LgnxBUO0UlzR9(D|1e(g%}jA9TV)I_m`QV>rV(z^oHL za4IlMM^ei_%LAm8Un1u1wxyGw34U|oSm#jya75)&sk>4tf h=w(VOxe44_-3q literal 490 zcmV=@MFh*3&jF0d@%ep>y9*Qwm`>H9*;hMA4t+@d zsrvE#OZ}mGZXb?Qi3Ge8B&I6PkVVv1%W7QdR@E zw`(~@bnR+|#Ei4W+k~~lahy`%rRf6?Q`ZDE{S>0FF+;J8h0M{cJ?{-!;t_rOxPN@A z4~KgHBr2QiPR$KA^>8lm)_NP_ax{068=?7)_^)6)OTlSlUfa5m9VRZJoAfP^z5sDO z07Y`9H*&Yem5AFX*7BnR^)k05%uzT>t<8