# scylla-cluster-tests configuration options

Each entry below lists the parameter name, its description, its default value (N/A when there is no default), and the environment variable that overrides it.
config_files a list of config files that would be used N/A SCT_CONFIG_FILES
cluster_backend backend that will be used, aws/gce/docker N/A SCT_CLUSTER_BACKEND
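Any parameter in this table can be set in one of the YAML files passed via config_files, or overridden through its SCT_* environment variable. A minimal sketch, assuming the YAML keys mirror the parameter names listed here (file name and values are illustrative):

```yaml
# minimal-longevity.yaml -- illustrative SCT config file; each key below is a
# parameter from this table and can be overridden by its SCT_* variable,
# e.g. SCT_CLUSTER_BACKEND=gce takes precedence over the value in this file.
cluster_backend: aws
test_duration: 180
n_db_nodes: 3
n_loaders: 1
n_monitor_nodes: 1
```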
test_method class.method used to run the test. Filled automatically with run-test sct command. N/A SCT_TEST_METHOD
test_duration Test duration (min). Parameter used to keep instances produced by tests
and for the Jenkins pipeline timeout and TimeoutThread.
60 SCT_TEST_DURATION
prepare_stress_duration Time in minutes, which is required to run prepare stress commands
defined in prepare_*_cmd for dataset generation, and is used in
test duration calculation
300 SCT_PREPARE_STRESS_DURATION
stress_duration Time in minutes of execution for stress commands from stress_cmd parameters,
used in test duration calculation
N/A SCT_STRESS_DURATION
n_db_nodes Number list of database data nodes in multiple data centers. When using
multiple data centers and zero-token nodes, a DC that has only zero-token nodes should be set to 0,
e.g. "3 3 0".
N/A SCT_N_DB_NODES
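A sketch of a multi-DC layout using the space-separated counts described above (the specific counts are illustrative):

```yaml
# Three data centers: 3 DB nodes in each of the first two and a zero-node third DC,
# with loaders and monitors only where they are needed.
n_db_nodes: "3 3 0"
n_loaders: "1 1 0"
n_monitor_nodes: "1 0 0"
```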
n_test_oracle_db_nodes Number list of oracle test nodes in multiple data centers. 1 SCT_N_TEST_ORACLE_DB_NODES
n_loaders Number list of loader nodes in multiple data centers N/A SCT_N_LOADERS
n_monitor_nodes Number list of monitor nodes in multiple data centers N/A SCT_N_MONITORS_NODES
intra_node_comm_public If True, all communication between nodes is via public addresses N/A SCT_INTRA_NODE_COMM_PUBLIC
endpoint_snitch The snitch class scylla would use

'GossipingPropertyFileSnitch' - default
'Ec2MultiRegionSnitch' - default on aws backend
'GoogleCloudSnitch'
N/A SCT_ENDPOINT_SNITCH
user_credentials_path Path to your user credentials. QA keys are downloaded automatically from the S3 bucket N/A SCT_USER_CREDENTIALS_PATH
cloud_credentials_path Path to your cloud credentials. QA keys are downloaded automatically from the S3 bucket N/A SCT_CLOUD_CREDENTIALS_PATH
cloud_cluster_id scylla cloud cluster id N/A SCT_CLOUD_CLUSTER_ID
cloud_prom_bearer_token scylla cloud promproxy bearer_token to federate monitoring data into our monitoring instance N/A SCT_CLOUD_PROM_BEARER_TOKEN
cloud_prom_path scylla cloud promproxy path to federate monitoring data into our monitoring instance N/A SCT_CLOUD_PROM_PATH
cloud_prom_host scylla cloud promproxy hostname to federate monitoring data into our monitoring instance N/A SCT_CLOUD_PROM_HOST
ip_ssh_connections Type of IP used to connect to machine instances.
This depends on whether you are running your tests from a machine inside
your cloud provider, where it makes sense to use 'private', or outside (use 'public')

Default: Use public IPs to connect to instances (public)
Use private IPs to connect to instances (private)
Use IPv6 IPs to connect to instances (ipv6)
private SCT_IP_SSH_CONNECTIONS
scylla_repo Url to the repo of scylla version to install scylla. Can provide specific version after a colon e.g: https://s3.amazonaws.com/downloads.scylladb.com/deb/ubuntu/scylla-2021.1.list:2021.1.18 N/A SCT_SCYLLA_REPO
scylla_apt_keys APT keys for ScyllaDB repos ['17723034C56D4B19', '5E08FBD8B5D6EC9C', 'D0A112E067426AB2', '491C93B9DE7496A7', 'A43E06657BAC99E3'] SCT_SCYLLA_APT_KEYS
unified_package Url to the unified package of scylla version to install scylla N/A SCT_UNIFIED_PACKAGE
nonroot_offline_install Install Scylla without requiring root privileges N/A SCT_NONROOT_OFFLINE_INSTALL
install_mode Scylla install mode, repo/offline/web repo SCT_INSTALL_MODE
scylla_version Version of scylla to install, ex. '2.3.1'
Automatically looks up AMIs and repo links for formal versions.
WARNING: can't be used together with 'scylla_repo' or 'ami_id_db_scylla'
N/A SCT_SCYLLA_VERSION
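scylla_version and scylla_repo are mutually exclusive ways of choosing what to install; a sketch (the version string is illustrative):

```yaml
# Either pin a formal release and let SCT resolve AMIs/repo links for it...
scylla_version: "2024.1"
# ...or point at an explicit repo file (optionally pinning a build after a colon),
# but never both at once:
# scylla_repo: https://s3.amazonaws.com/downloads.scylladb.com/deb/ubuntu/scylla-2021.1.list:2021.1.18
```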
user_data_format_version Format version of the user-data to use for scylla images,
defaults to what is tagged on the image used
N/A SCT_USER_DATA_FORMAT_VERSION
oracle_user_data_format_version Format version of the user-data to use for scylla images,
defaults to what is tagged on the image used
N/A SCT_ORACLE_USER_DATA_FORMAT_VERSION
oracle_scylla_version Version of scylla to use as oracle cluster with gemini tests, ex. '3.0.11'
Automatically looks up AMIs for formal versions.
WARNING: can't be used together with 'ami_id_db_oracle'
2022.1.14 SCT_ORACLE_SCYLLA_VERSION
scylla_linux_distro The distro name and family name to use. Example: 'ubuntu-jammy' or 'debian-bookworm'. ubuntu-focal SCT_SCYLLA_LINUX_DISTRO
scylla_linux_distro_loader The distro name and family name to use. Example: 'ubuntu-jammy' or 'debian-bookworm'. ubuntu-jammy SCT_SCYLLA_LINUX_DISTRO_LOADER
assert_linux_distro_features List of distro features relevant to SCT test. Example: 'fips'. N/A SCT_ASSERT_LINUX_DISTRO_FEATURES
scylla_repo_m Url to the repo of scylla version to install scylla from for management tests N/A SCT_SCYLLA_REPO_M
scylla_repo_loader Url to the repo of scylla version to install c-s for loader https://s3.amazonaws.com/downloads.scylladb.com/deb/ubuntu/scylla-5.2.list SCT_SCYLLA_REPO_LOADER
scylla_mgmt_address Url to the repo of scylla manager version to install for management tests N/A SCT_SCYLLA_MGMT_ADDRESS
scylla_mgmt_agent_address Url to the repo of scylla manager agent version to install for management tests N/A SCT_SCYLLA_MGMT_AGENT_ADDRESS
manager_version Branch of scylla manager server and agent to install. Options in defaults/manager_versions.yaml 3.4 SCT_MANAGER_VERSION
target_manager_version Branch of scylla manager server and agent to upgrade to. Options in defaults/manager_versions.yaml N/A SCT_TARGET_MANAGER_VERSION
manager_scylla_backend_version Branch of scylla db enterprise to install. Options in defaults/manager_versions.yaml 2024 SCT_MANAGER_SCYLLA_BACKEND_VERSION
scylla_mgmt_agent_version 3.4.0 SCT_SCYLLA_MGMT_AGENT_VERSION
scylla_mgmt_pkg Url to the scylla manager packages to install for management tests N/A SCT_SCYLLA_MGMT_PKG
stress_cmd_lwt_i Stress command for LWT performance test for INSERT baseline N/A SCT_STRESS_CMD_LWT_I
stress_cmd_lwt_d Stress command for LWT performance test for DELETE baseline N/A SCT_STRESS_CMD_LWT_D
stress_cmd_lwt_u Stress command for LWT performance test for UPDATE baseline N/A SCT_STRESS_CMD_LWT_U
stress_cmd_lwt_ine Stress command for LWT performance test for INSERT with IF NOT EXISTS N/A SCT_STRESS_CMD_LWT_INE
stress_cmd_lwt_uc Stress command for LWT performance test for UPDATE with IF N/A SCT_STRESS_CMD_LWT_UC
stress_cmd_lwt_ue Stress command for LWT performance test for UPDATE with IF EXISTS N/A SCT_STRESS_CMD_LWT_UE
stress_cmd_lwt_de Stress command for LWT performance test for DELETE with IF EXISTS N/A SCT_STRESS_CMD_LWT_DE
stress_cmd_lwt_dc Stress command for LWT performance test for DELETE with IF condition> N/A SCT_STRESS_CMD_LWT_DC
stress_cmd_lwt_mixed Stress command for LWT performance test for mixed lwt load N/A SCT_STRESS_CMD_LWT_MIXED
stress_cmd_lwt_mixed_baseline Stress command for LWT performance test for mixed lwt load baseline N/A SCT_STRESS_CMD_LWT_MIXED_BASELINE
use_cloud_manager When defined true, will install scylla cloud manager N/A SCT_USE_CLOUD_MANAGER
use_ldap When defined true, LDAP is going to be used. N/A SCT_USE_LDAP
use_ldap_authorization When defined true, will create a docker container with LDAP and configure scylla.yaml to use it N/A SCT_USE_LDAP_AUTHORIZATION
use_ldap_authentication When defined true, will create a docker container with LDAP and configure scylla.yaml to use it N/A SCT_USE_LDAP_AUTHENTICATION
prepare_saslauthd When defined true, will install and start saslauthd service N/A SCT_PREPARE_SASLAUTHD
ldap_server_type This option indicates which server is going to be used for LDAP operations. [openldap, ms_ad] N/A SCT_LDAP_SERVER_TYPE
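The LDAP-related flags are typically used together; a sketch of an openldap-backed run (values are illustrative):

```yaml
# Spin up an LDAP container and point scylla.yaml at it for both
# authentication and authorization.
use_ldap: true
use_ldap_authentication: true
use_ldap_authorization: true
ldap_server_type: openldap
```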
use_mgmt When defined true, will install scylla management True SCT_USE_MGMT
parallel_node_operations When defined true, will run node operations in parallel. Supported operations: startup N/A SCT_PARALLEL_NODE_OPERATIONS
manager_prometheus_port Port to be used by the manager to contact Prometheus 5090 SCT_MANAGER_PROMETHEUS_PORT
target_scylla_mgmt_server_address Url to the repo of scylla manager version used to upgrade the manager server N/A SCT_TARGET_SCYLLA_MGMT_SERVER_ADDRESS
target_scylla_mgmt_agent_address Url to the repo of scylla manager version used to upgrade the manager agents N/A SCT_TARGET_SCYLLA_MGMT_AGENT_ADDRESS
update_db_packages A local directory of rpms to install a custom version on top of
the scylla installed (or from repo or from ami)
N/A SCT_UPDATE_DB_PACKAGES
monitor_branch The branch of scylla-monitoring to use branch-4.8 SCT_MONITOR_BRANCH
db_type Db type to install into db nodes, scylla/cassandra scylla SCT_DB_TYPE
user_prefix the prefix of the name of the cloud instances, defaults to username N/A SCT_USER_PREFIX
ami_id_db_scylla_desc version name to report stats to Elasticsearch and tagged on cloud instances N/A SCT_AMI_ID_DB_SCYLLA_DESC
sct_public_ip Override the default hostname address of the sct test runner,
for the monitoring of the Nemesis.
can only work out of the box in AWS
N/A SCT_SCT_PUBLIC_IP
sct_ngrok_name Override the default hostname address of the sct test runner,
using ngrok server, see readme for more instructions
N/A SCT_NGROK_NAME
backtrace_decoding If True, all backtraces found in db nodes would be decoded automatically True SCT_BACKTRACE_DECODING
print_kernel_callstack If True, Scylla will print the kernel callstack to its logs; otherwise it will still try,
and may print a message that it failed to do so.
True SCT_PRINT_KERNEL_CALLSTACK
instance_provision instance_provision: spot | on_demand | spot_fleet spot SCT_INSTANCE_PROVISION
instance_provision_fallback_on_demand instance_provision_fallback_on_demand: create the instance with the on_demand provision type if creating an instance with the selected 'instance_provision' type failed. Expected values: true | false (default - false) N/A SCT_INSTANCE_PROVISION_FALLBACK_ON_DEMAND
reuse_cluster If reuse_cluster is set it should hold test_id of the cluster that will be reused.
reuse_cluster: 7dc6db84-eb01-4b61-a946-b5c72e0f6d71
N/A SCT_REUSE_CLUSTER
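A sketch of re-attaching to an existing cluster instead of provisioning a new one, using the example test_id quoted above:

```yaml
# Reuse the resources tagged with this test_id; no new instances are created.
reuse_cluster: 7dc6db84-eb01-4b61-a946-b5c72e0f6d71
```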
test_id test id to filter by N/A SCT_TEST_ID
db_nodes_shards_selection How to select the number of shards for Scylla. Expected values: default/random.
Default value: 'default'.
With the random option, Scylla will start with a different (random) number of shards on every node of the cluster
default SCT_NODES_SHARDS_SELECTION
seeds_selector How to select the seeds. Expected values: random/first/all all SCT_SEEDS_SELECTOR
seeds_num Number of seeds to select 1 SCT_SEEDS_NUM
email_recipients list of emails to send the performance regression test results to ['[email protected]'] SCT_EMAIL_RECIPIENTS
email_subject_postfix Email subject postfix N/A SCT_EMAIL_SUBJECT_POSTFIX
enable_test_profiling Turn on sct profiling N/A SCT_ENABLE_TEST_PROFILING
ssh_transport Set type of ssh library to use. Could be 'fabric' (default) or 'libssh2' libssh2 SSH_TRANSPORT
experimental_features unlock specified experimental features N/A SCT_EXPERIMENTAL_FEATURES
server_encrypt when enabled, scylla will use encryption on the server side N/A SCT_SERVER_ENCRYPT
client_encrypt when enabled, scylla will use encryption on the client side N/A SCT_CLIENT_ENCRYPT
client_encrypt_mtls when enabled, scylla will enforce mutual authentication when client-to-node encryption is enabled N/A SCT_CLIENT_ENCRYPT_MTLS
server_encrypt_mtls when enabled, scylla will enforce mutual authentication when node-to-node encryption is enabled N/A SCT_SERVER_ENCRYPT_MTLS
hinted_handoff enable or disable scylla hinted handoff (enabled/disabled) disabled SCT_HINTED_HANDOFF
authenticator which authenticator scylla will use AllowAllAuthenticator/PasswordAuthenticator N/A SCT_AUTHENTICATOR
authenticator_user the username if PasswordAuthenticator is used N/A SCT_AUTHENTICATOR_USER
authenticator_password the password if PasswordAuthenticator is used N/A SCT_AUTHENTICATOR_PASSWORD
authorizer which authorizer scylla will use AllowAllAuthorizer/CassandraAuthorizer N/A SCT_AUTHORIZER
sla run SLA nemeses if the test is SLA only N/A SCT_SLA
service_level_shares List of service level shares - how many service levels to create and test. Used in SLA tests. List of int, like: [100, 200] [1000] SCT_SERVICE_LEVEL_SHARES
alternator_port Port to configure for alternator in scylla.yaml N/A SCT_ALTERNATOR_PORT
dynamodb_primarykey_type Type of dynamodb table to create with range key or not, can be:
HASH,HASH_AND_RANGE
HASH SCT_DYNAMODB_PRIMARYKEY_TYPE
alternator_write_isolation Set the write isolation for the alternator table, see https://github.com/scylladb/scylla/blob/master/docs/alternator/alternator.md#write-isolation-policies for more details N/A SCT_ALTERNATOR_WRITE_ISOLATION
alternator_use_dns_routing If true, spawn a docker with a dns server for the ycsb loader to point to N/A SCT_ALTERNATOR_USE_DNS_ROUTING
alternator_enforce_authorization If true, enable the authorization check in dynamodb api (alternator) N/A SCT_ALTERNATOR_ENFORCE_AUTHORIZATION
alternator_access_key_id the aws_access_key_id that would be used for alternator N/A SCT_ALTERNATOR_ACCESS_KEY_ID
alternator_secret_access_key the aws_secret_access_key that would be used for alternator N/A SCT_ALTERNATOR_SECRET_ACCESS_KEY
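A sketch combining the Alternator options above (port and credentials are illustrative placeholders):

```yaml
# Expose the DynamoDB-compatible API and require credentials on every request.
alternator_port: 8080
alternator_enforce_authorization: true
alternator_access_key_id: alternator_key         # placeholder
alternator_secret_access_key: alternator_secret  # placeholder
dynamodb_primarykey_type: HASH_AND_RANGE
```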
region_aware_loader When in multi region mode, run stress on loader that is located in the same region as db node N/A SCT_REGION_AWARE_LOADER
append_scylla_args More arguments to append to scylla command line --blocked-reactor-notify-ms 25 --abort-on-lsa-bad-alloc 1 --abort-on-seastar-bad-alloc --abort-on-internal-error 1 --abort-on-ebadf 1 --enable-sstable-key-validation 1 SCT_APPEND_SCYLLA_ARGS
append_scylla_args_oracle More arguments to append to oracle command line --enable-cache false SCT_APPEND_SCYLLA_ARGS_ORACLE
append_scylla_yaml More configuration to append to /etc/scylla/scylla.yaml N/A SCT_APPEND_SCYLLA_YAML
append_scylla_node_exporter_args More arguments to append to scylla-node-exporter command line N/A SCT_SCYLLA_NODE_EXPORTER_ARGS
nemesis_class_name Nemesis class to use (possible types in sdcm.nemesis).
The following syntax is supported:
- nemesis_class_name: "NemesisName" Run one nemesis in a single thread
- nemesis_class_name: "<NemesisName>:<num>" Run <NemesisName> in <num>
parallel threads on different nodes. Ex.: "ChaosMonkey:2"
- nemesis_class_name: "<NemesisName1>:<num1> <NemesisName2>:<num2>" Run
<NemesisName1> in <num1> parallel threads and <NemesisName2> in <num2>
parallel threads. Ex.: "DisruptiveMonkey:1 NonDisruptiveMonkey:2"
NoOpMonkey SCT_NEMESIS_CLASS_NAME
nemesis_interval Nemesis sleep interval to use if None provided specifically in the test 5 SCT_NEMESIS_INTERVAL
nemesis_sequence_sleep_between_ops Sleep interval between nemesis operations for use in unique_sequence nemesis kind of tests N/A SCT_NEMESIS_SEQUENCE_SLEEP_BETWEEN_OPS
nemesis_during_prepare Run nemesis during prepare stage of the test True SCT_NEMESIS_DURING_PREPARE
nemesis_seed A seed number in order to repeat nemesis sequence as part of SisyphusMonkey N/A SCT_NEMESIS_SEED
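A sketch of a nemesis configuration using the syntax described above (the monkey names and interval are illustrative):

```yaml
# One DisruptiveMonkey thread plus two ChaosMonkey threads, waking up every
# 30 minutes and skipped during the prepare stage.
nemesis_class_name: "DisruptiveMonkey:1 ChaosMonkey:2"
nemesis_interval: 30
nemesis_during_prepare: false
```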
nemesis_add_node_cnt Add/remove nodes during GrowShrinkCluster nemesis 1 SCT_NEMESIS_ADD_NODE_CNT
nemesis_grow_shrink_instance_type Instance type to use for adding/removing nodes during GrowShrinkCluster nemesis N/A SCT_NEMESIS_GROW_SHRINK_INSTANCE_TYPE
cluster_target_size Used for scale test: max size of the cluster N/A SCT_CLUSTER_TARGET_SIZE
space_node_threshold Space node threshold before starting nemesis (bytes)
The default value is 6GB (6x1024^3 bytes)
This value is supposed to reproduce
scylladb/scylladb#1140
N/A SCT_SPACE_NODE_THRESHOLD
nemesis_filter_seeds If true runs the nemesis only on non seed nodes N/A SCT_NEMESIS_FILTER_SEEDS
stress_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD
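A sketch of a stress_cmd entry; the cassandra-stress arguments are illustrative, and the -node list is intentionally omitted because SCT appends it:

```yaml
stress_cmd:
  - >-
    cassandra-stress write cl=QUORUM duration=60m
    -schema 'replication(strategy=NetworkTopologyStrategy,replication_factor=3)'
    -mode cql3 native -rate threads=100 -pop seq=1..10000000
```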
gemini_schema_url Url of the schema/configuration the gemini tool would use N/A SCT_GEMINI_SCHEMA_URL
gemini_cmd gemini command to run (for now used only in GeminiTest) N/A SCT_GEMINI_CMD
gemini_seed Seed number for gemini command N/A SCT_GEMINI_SEED
gemini_table_options table options for created table. example:
["cdc={'enabled': true}"]
["cdc={'enabled': true}", "compaction={'class': 'IncrementalCompactionStrategy'}"]
N/A SCT_GEMINI_TABLE_OPTIONS
instance_type_loader AWS instance type of the loader node N/A SCT_INSTANCE_TYPE_LOADER
instance_type_monitor AWS instance type of the monitor node N/A SCT_INSTANCE_TYPE_MONITOR
instance_type_db AWS instance type of the db node N/A SCT_INSTANCE_TYPE_DB
instance_type_db_oracle AWS instance type of the oracle node N/A SCT_INSTANCE_TYPE_DB_ORACLE
instance_type_runner instance type of the sct-runner node N/A SCT_INSTANCE_TYPE_RUNNER
region_name AWS regions to use N/A SCT_REGION_NAME
security_group_ids AWS security groups ids to use N/A SCT_SECURITY_GROUP_IDS
use_placement_group if true, create a 'cluster' placement group for the test case, to achieve low-latency network performance N/A SCT_USE_PLACEMENT_GROUP
subnet_id AWS subnet ids to use N/A SCT_SUBNET_ID
ami_id_db_scylla AWS AMI id to use for scylla db node N/A SCT_AMI_ID_DB_SCYLLA
ami_id_loader AWS AMI id to use for loader node N/A SCT_AMI_ID_LOADER
ami_id_monitor AWS AMI id to use for monitor node N/A SCT_AMI_ID_MONITOR
ami_id_db_cassandra AWS AMI id to use for cassandra node N/A SCT_AMI_ID_DB_CASSANDRA
ami_id_db_oracle AWS AMI id to use for oracle node N/A SCT_AMI_ID_DB_ORACLE
root_disk_size_db N/A SCT_ROOT_DISK_SIZE_DB
root_disk_size_monitor N/A SCT_ROOT_DISK_SIZE_MONITOR
root_disk_size_loader N/A SCT_ROOT_DISK_SIZE_LOADER
root_disk_size_runner root disk size in Gb for sct-runner N/A SCT_ROOT_DISK_SIZE_RUNNER
ami_db_scylla_user N/A SCT_AMI_DB_SCYLLA_USER
ami_monitor_user N/A SCT_AMI_MONITOR_USER
ami_loader_user N/A SCT_AMI_LOADER_USER
ami_db_cassandra_user N/A SCT_AMI_DB_CASSANDRA_USER
spot_max_price The max percentage of the on demand price we set for spot/fleet instances N/A SCT_SPOT_MAX_PRICE
extra_network_interface if true, create extra network interface on each node N/A SCT_EXTRA_NETWORK_INTERFACE
aws_instance_profile_name_db This is the name of the instance profile to set on all db instances N/A SCT_AWS_INSTANCE_PROFILE_NAME_DB
aws_instance_profile_name_loader This is the name of the instance profile to set on all loader instances N/A SCT_AWS_INSTANCE_PROFILE_NAME_LOADER
backup_bucket_backend the backend to be used for backup (e.g., 's3', 'gcs' or 'azure') N/A SCT_BACKUP_BUCKET_BACKEND
backup_bucket_location the bucket name to be used for backup (e.g., 'manager-backup-tests') N/A SCT_BACKUP_BUCKET_LOCATION
backup_bucket_region the AWS region of a bucket to be used for backup (e.g., 'eu-west-1') N/A SCT_BACKUP_BUCKET_REGION
use_prepared_loaders If True, we use prepared VMs for loader (instead of using docker images) N/A SCT_USE_PREPARED_LOADERS
scylla_d_overrides_files list of files that should be uploaded to the /etc/scylla.d/ directory to override scylla config files N/A SCT_scylla_d_overrides_files
gce_project gcp project name to use N/A SCT_GCE_PROJECT
gce_datacenter Supported: us-east1 - the zone will be selected automatically, or you can specify the zone explicitly, for example: us-east1-b N/A SCT_GCE_DATACENTER
gce_network N/A SCT_GCE_NETWORK
gce_image_db N/A SCT_GCE_IMAGE_DB
gce_image_monitor N/A SCT_GCE_IMAGE_MONITOR
gce_image_loader N/A SCT_GCE_IMAGE_LOADER
gce_image_username N/A SCT_GCE_IMAGE_USERNAME
gce_instance_type_loader N/A SCT_GCE_INSTANCE_TYPE_LOADER
gce_root_disk_type_loader N/A SCT_GCE_ROOT_DISK_TYPE_LOADER
gce_n_local_ssd_disk_loader N/A SCT_GCE_N_LOCAL_SSD_DISK_LOADER
gce_instance_type_monitor N/A SCT_GCE_INSTANCE_TYPE_MONITOR
gce_root_disk_type_monitor N/A SCT_GCE_ROOT_DISK_TYPE_MONITOR
gce_n_local_ssd_disk_monitor N/A SCT_GCE_N_LOCAL_SSD_DISK_MONITOR
gce_instance_type_db N/A SCT_GCE_INSTANCE_TYPE_DB
gce_root_disk_type_db N/A SCT_GCE_ROOT_DISK_TYPE_DB
gce_n_local_ssd_disk_db N/A SCT_GCE_N_LOCAL_SSD_DISK_DB
gce_pd_standard_disk_size_db N/A SCT_GCE_PD_STANDARD_DISK_SIZE_DB
gce_pd_ssd_disk_size_db N/A SCT_GCE_PD_SSD_DISK_SIZE_DB
gce_setup_hybrid_raid If True, SCT configures a hybrid RAID of NVMEs and an SSD for scylla's data N/A SCT_GCE_SETUP_HYBRID_RAID
gce_pd_ssd_disk_size_loader N/A SCT_GCE_PD_SSD_DISK_SIZE_LOADER
gce_pd_ssd_disk_size_monitor N/A SCT_GCE_SSD_DISK_SIZE_MONITOR
azure_region_name Supported: eastus N/A SCT_AZURE_REGION_NAME
azure_instance_type_loader N/A SCT_AZURE_INSTANCE_TYPE_LOADER
azure_instance_type_monitor N/A SCT_AZURE_INSTANCE_TYPE_MONITOR
azure_instance_type_db N/A SCT_AZURE_INSTANCE_TYPE_DB
azure_instance_type_db_oracle N/A SCT_AZURE_INSTANCE_TYPE_DB_ORACLE
azure_image_db N/A SCT_AZURE_IMAGE_DB
azure_image_monitor N/A SCT_AZURE_IMAGE_MONITOR
azure_image_loader N/A SCT_AZURE_IMAGE_LOADER
azure_image_username N/A SCT_AZURE_IMAGE_USERNAME
eks_service_ipv4_cidr N/A SCT_EKS_SERVICE_IPV4_CIDR
eks_vpc_cni_version N/A SCT_EKS_VPC_CNI_VERSION
eks_role_arn N/A SCT_EKS_ROLE_ARN
eks_cluster_version N/A SCT_EKS_CLUSTER_VERSION
eks_nodegroup_role_arn N/A SCT_EKS_NODEGROUP_ROLE_ARN
gke_cluster_version N/A SCT_GKE_CLUSTER_VERSION
gke_k8s_release_channel K8S release channel name to be used. Expected values are: 'rapid', 'regular', 'stable' and '' (static / No channel). N/A SCT_GKE_K8S_RELEASE_CHANNEL
k8s_scylla_utils_docker_image Docker image to be used by Scylla operator to tune K8S nodes for performance. Used when 'k8s_enable_performance_tuning' is defined to 'True'. If not set then the default from operator will be used. N/A SCT_K8S_SCYLLA_UTILS_DOCKER_IMAGE
k8s_enable_performance_tuning Define whether performance tuning must run or not. N/A SCT_K8S_ENABLE_PERFORMANCE_TUNING
k8s_deploy_monitoring N/A SCT_K8S_DEPLOY_MONITORING
k8s_local_volume_provisioner_type Defines the type of the K8S local volume provisioner to be deployed. It may be either 'static' or 'dynamic'. Details: 'dynamic': https://github.com/scylladb/k8s-local-volume-provisioner; 'static': sdcm/k8s_configs/static-local-volume-provisioner.yaml N/A SCT_K8S_LOCAL_VOLUME_PROVISIONER_TYPE
k8s_scylla_operator_docker_image Docker image to be used for installation of scylla operator. N/A SCT_K8S_SCYLLA_OPERATOR_DOCKER_IMAGE
k8s_scylla_operator_upgrade_docker_image Docker image to be used for upgrade of scylla operator. N/A SCT_K8S_SCYLLA_OPERATOR_UPGRADE_DOCKER_IMAGE
k8s_scylla_operator_helm_repo Link to the Helm repository where to get 'scylla-operator' charts from. N/A SCT_K8S_SCYLLA_OPERATOR_HELM_REPO
k8s_scylla_operator_upgrade_helm_repo Link to the Helm repository where to get 'scylla-operator' charts for upgrade. N/A SCT_K8S_SCYLLA_OPERATOR_UPGRADE_HELM_REPO
k8s_scylla_operator_chart_version Version of 'scylla-operator' Helm chart to use. If not set then latest one will be used. N/A SCT_K8S_SCYLLA_OPERATOR_CHART_VERSION
k8s_scylla_operator_upgrade_chart_version Version of 'scylla-operator' Helm chart to use for upgrade. N/A SCT_K8S_SCYLLA_OPERATOR_UPGRADE_CHART_VERSION
k8s_functional_test_dataset Defines the dataset used to pre-fill the cluster in functional tests. Defined in sdcm.utils.sstable.load_inventory. Expected values: BIG_SSTABLE_MULTI_COLUMNS_DATA, MULTI_COLUMNS_DATA N/A SCT_K8S_FUNCTIONAL_TEST_DATASET
k8s_scylla_cpu_limit The CPU limit that will be set for each Scylla cluster deployed in K8S. If not set, then will be autocalculated. Example: '500m' or '2' N/A SCT_K8S_SCYLLA_CPU_LIMIT
k8s_scylla_memory_limit The memory limit that will be set for each Scylla cluster deployed in K8S. If not set, then will be autocalculated. Example: '16384Mi' N/A SCT_K8S_SCYLLA_MEMORY_LIMIT
k8s_scylla_cluster_name N/A SCT_K8S_SCYLLA_CLUSTER_NAME
k8s_n_scylla_pods_per_cluster Number of Scylla pods per Scylla cluster. 3 K8S_N_SCYLLA_PODS_PER_CLUSTER
k8s_scylla_disk_gi N/A SCT_K8S_SCYLLA_DISK_GI
k8s_scylla_disk_class N/A SCT_K8S_SCYLLA_DISK_CLASS
k8s_loader_cluster_name N/A SCT_K8S_LOADER_CLUSTER_NAME
k8s_n_loader_pods_per_cluster Number of loader pods per loader cluster. N/A SCT_K8S_N_LOADER_PODS_PER_CLUSTER
k8s_loader_run_type Defines how the loader pods must run. It may be either 'static' (default, run stress command on the constantly existing idle pod having reserved resources, perf-oriented) or 'dynamic' (run stress command in a separate pod as main thread and get logs in a separate retryable API call not having resource reservations). dynamic SCT_K8S_LOADER_RUN_TYPE
k8s_instance_type_auxiliary Instance type for the nodes of the K8S auxiliary/default node pool. N/A SCT_K8S_INSTANCE_TYPE_AUXILIARY
k8s_instance_type_monitor Instance type for the nodes of the K8S monitoring node pool. N/A SCT_K8S_INSTANCE_TYPE_MONITOR
mini_k8s_version N/A SCT_MINI_K8S_VERSION
k8s_cert_manager_version N/A SCT_K8S_CERT_MANAGER_VERSION
k8s_minio_storage_size 10Gi SCT_K8S_MINIO_STORAGE_SIZE
k8s_log_api_calls Defines whether the K8S API server logging must be enabled and its logs gathered. Be aware that it may be a really huge set of data. N/A SCT_K8S_LOG_API_CALLS
k8s_tenants_num Number of Scylla clusters to create in the K8S cluster. 1 SCT_TENANTS_NUM
k8s_enable_tls Defines whether we enable the scylla operator TLS feature or not. N/A SCT_K8S_ENABLE_TLS
k8s_enable_sni Defines whether we install SNI and use it or not (serverless feature). N/A SCT_K8S_ENABLE_SNI
k8s_enable_alternator Defines whether we enable the alternator feature using scylla-operator or not. N/A SCT_K8S_ENABLE_ALTERNATOR
k8s_connection_bundle_file Serverless configuration bundle file N/A SCT_K8S_CONNECTION_BUNDLE_FILE
k8s_db_node_service_type Defines the type of the K8S 'Service' objects type used for ScyllaDB pods. Empty value means 'do not set and allow scylla-operator to choose'. N/A SCT_K8S_DB_NODE_SERVICE_TYPE
k8s_db_node_to_node_broadcast_ip_type Defines the source of the IP address to be used for the 'broadcast_address' config option in the 'scylla.yaml' files. Empty value means 'do not set and allow scylla-operator to choose'. N/A SCT_K8S_DB_NODE_TO_NODE_BROADCAST_IP_TYPE
k8s_db_node_to_client_broadcast_ip_type Defines the source of the IP address to be used for the 'broadcast_rpc_address' config option in the 'scylla.yaml' files. Empty value means 'do not set and allow scylla-operator to choose'. N/A SCT_K8S_DB_NODE_TO_CLIENT_BROADCAST_IP_TYPE
k8s_use_chaos_mesh enables chaos-mesh for k8s testing N/A SCT_K8S_USE_CHAOS_MESH
k8s_n_auxiliary_nodes Number of nodes in auxiliary pool N/A SCT_K8S_N_AUXILIARY_NODES
k8s_n_monitor_nodes Number of nodes in monitoring pool that will be used for scylla-operator's deployed monitoring pods. N/A SCT_K8S_N_MONITOR_NODES
mgmt_docker_image Scylla manager docker image, i.e. 'scylladb/scylla-manager:2.2.1' scylladb/scylla-manager:3.4.0 SCT_MGMT_DOCKER_IMAGE
docker_image Scylla docker image repo, i.e. 'scylladb/scylla', if omitted is calculated from scylla_version N/A SCT_DOCKER_IMAGE
docker_network local docker network to use, if there's need to have db cluster connect to other services running in docker N/A SCT_DOCKER_NETWORK
s3_baremetal_config N/A SCT_S3_BAREMETAL_CONFIG
db_nodes_private_ip N/A SCT_DB_NODES_PRIVATE_IP
db_nodes_public_ip N/A SCT_DB_NODES_PUBLIC_IP
loaders_private_ip N/A SCT_LOADERS_PRIVATE_IP
loaders_public_ip N/A SCT_LOADERS_PUBLIC_IP
monitor_nodes_private_ip N/A SCT_MONITOR_NODES_PRIVATE_IP
monitor_nodes_public_ip N/A SCT_MONITOR_NODES_PUBLIC_IP
cassandra_stress_population_size 1000000 SCT_CASSANDRA_STRESS_POPULATION_SIZE
cassandra_stress_threads 1000 SCT_CASSANDRA_STRESS_THREADS
add_node_cnt 1 SCT_ADD_NODE_CNT
stress_multiplier Number of cassandra-stress processes 1 SCT_STRESS_MULTIPLIER
stress_multiplier_w Number of cassandra-stress processes for write workload 1 SCT_STRESS_MULTIPLIER_W
stress_multiplier_r Number of cassandra-stress processes for read workload 1 SCT_STRESS_MULTIPLIER_R
stress_multiplier_m Number of cassandra-stress processes for mixed workload 1 SCT_STRESS_MULTIPLIER_M
run_fullscan N/A SCT_RUN_FULLSCAN
run_full_partition_scan Runs a background thread that issues reversed queries on a random table partition at an interval N/A SCT_run_full_partition_scan
run_tombstone_gc_verification Runs a background thread that verifies tombstone GC on a table at an interval N/A SCT_RUN_TOMBSTONE_GC_VERIFICATION
keyspace_num 1 SCT_KEYSPACE_NUM
round_robin N/A SCT_ROUND_ROBIN
batch_size 1 SCT_BATCH_SIZE
pre_create_schema N/A SCT_PRE_CREATE_SCHEMA
pre_create_keyspace Command to create a keyspace to be pre-created before running the workload N/A SCT_PRE_CREATE_KEYSPACE
post_prepare_cql_cmds CQL Commands to run after prepare stage finished (relevant only to longevity_test.py) N/A SCT_POST_PREPARE_CQL_CMDS
prepare_wait_no_compactions_timeout At the end of the prepare stage, run major compaction and wait this long (in minutes) for compaction to finish. (relevant only to longevity_test.py) Should be used only when facing issues where compaction affects the test or load N/A SCT_PREPARE_WAIT_NO_COMPACTIONS_TIMEOUT
compaction_strategy Choose a specific compaction strategy to pre-create schema with. SizeTieredCompactionStrategy SCT_COMPACTION_STRATEGY
sstable_size Configure sstable size for the usage of pre-create-schema mode N/A SSTABLE_SIZE
cluster_health_check When true, start cluster health checker for all nodes True SCT_CLUSTER_HEALTH_CHECK
data_validation A group of sub-parameters: validate_partitions, table_name, primary_key_column,
partition_range_with_data_validation, max_partitions_in_test_table.
1. validate_partitions - when true, validating the same number of rows-per-partition before/after a Nemesis.
2. table_name - table name to check for the validate_partitions check.
3. primary_key_column - primary key of the table to check for the validate_partitions check
4. partition_range_with_data_validation - Relevant for scylla-bench. A range (min - max) of PK values
for partitions to be validated by reads and not to be deleted during test. Example: 0-250.
5. max_partitions_in_test_table - Relevant for scylla-bench. Max partition keys (partition-count)
in the scylla_bench.test table.
N/A SCT_DATA_VALIDATION
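A sketch of a data_validation block, assuming the sub-parameters above are supplied as a YAML string (table and range values are illustrative):

```yaml
data_validation: |
  validate_partitions: true
  table_name: "scylla_bench.test"
  primary_key_column: "pk"
  partition_range_with_data_validation: "0-250"
  max_partitions_in_test_table: 1000
```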
stress_read_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_READ_CMD
prepare_verify_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_PREPARE_VERIFY_CMD
user_profile_table_count number of tables to create for template user c-s 1 SCT_USER_PROFILE_TABLE_COUNT
scylla_mgmt_upgrade_to_repo Url to the repo of scylla manager version to upgrade to for management tests N/A SCT_SCYLLA_MGMT_UPGRADE_TO_REPO
mgmt_restore_extra_params Manager restore operation extra parameters: batch-size, parallel, etc. For example, --batch-size 2 --parallel 1. The provided string is appended to the restore cmd N/A SCT_MGMT_RESTORE_EXTRA_PARAMS
mgmt_agent_backup_config Manager agent backup general configuration: checkers, transfers, low_level_retries. For example, {'checkers': 100, 'transfers': 2, 'low_level_retries': 20} N/A SCT_MGMT_AGENT_BACKUP_CONFIG
mgmt_reuse_backup_snapshot_name Name of the backup snapshot to use in the Manager restore benchmark test, for example, 500gb_2t_ics. The name encodes the dataset size (500gb), number of tables (2) and compaction strategy (ICS) N/A SCT_MGMT_REUSE_BACKUP_SNAPSHOT_NAME
mgmt_skip_post_restore_stress_read Skip post-restore c-s verification read in the Manager restore benchmark tests N/A SCT_MGMT_SKIP_POST_RESTORE_STRESS_READ
mgmt_nodetool_refresh_flags Nodetool refresh extra options like --load-and-stream or --primary-replica-only N/A SCT_MGMT_NODETOOL_REFRESH_FLAGS
mgmt_prepare_snapshot_size Size of backup snapshot in Gb to be prepared for backup N/A SCT_MGMT_PREPARE_SNAPSHOT_SIZE
stress_cmd_w cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_W
stress_cmd_r cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_R
stress_cmd_m cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_M
stress_cmd_cache_warmup cassandra-stress commands for warm-up before read workload.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_CACHE_WARM_UP
prepare_write_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_PREPARE_WRITE_CMD
stress_cmd_no_mv cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_NO_MV
stress_cmd_no_mv_profile N/A SCT_STRESS_CMD_NO_MV_PROFILE
perf_extra_jobs_to_compare jobs to compare performance results with, for example if running in staging, we still can compare with official jobs N/A SCT_PERF_EXTRA_JOBS_TO_COMPARE
cs_user_profiles cassandra-stress user-profiles list. Executed in test step N/A SCT_CS_USER_PROFILES
prepare_cs_user_profiles cassandra-stress user-profiles list. Executed in prepare step N/A SCT_PREPARE_CS_USER_PROFILES
cs_duration 50m SCT_CS_DURATION
cs_debug enable debug for cassandra-stress N/A SCT_CS_DEBUG
stress_cmd_mv cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_MV
prepare_stress_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_PREPARE_STRESS_CMD
perf_gradual_threads Threads amount of c-s load for gradual performance test per sub-test. Example: {'read': 100, 'write': 200, 'mixed': 300} N/A SCT_PERF_GRADUAL_THREADS
perf_gradual_throttle_steps Used for gradual performance test. Define throttle for load step in ops. Example: {'read': ['100000', '150000'], 'mixed': ['300']} N/A SCT_PERF_GRADUAL_THROTTLE_STEPS
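A sketch of the gradual-performance knobs, reusing the example values from the descriptions above:

```yaml
# Per-sub-test c-s thread counts and the throttle (ops) applied at each load step.
perf_gradual_threads: {'read': 100, 'write': 200, 'mixed': 300}
perf_gradual_throttle_steps: {'read': ['100000', '150000'], 'mixed': ['300']}
```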
skip_download N/A SCT_SKIP_DOWNLOAD
sstable_file N/A SCT_SSTABLE_FILE
sstable_url N/A SCT_SSTABLE_URL
sstable_md5 N/A SCT_SSTABLE_MD5
flush_times N/A SCT_FLUSH_TIMES
flush_period N/A SCT_FLUSH_PERIOD
new_scylla_repo N/A SCT_NEW_SCYLLA_REPO
new_version Assign new upgrade version, use it to upgrade to specific minor release. eg: 3.0.1 N/A SCT_NEW_VERSION
target_upgrade_version Assign target upgrade version, used to decide whether the truncate entries test should be run. This test should be performed in case the target upgrade version >= 3.1 N/A SCT_TARGET_UPGRADE_VERSION
disable_raft As of now, raft is enabled by default in all [upgrade] tests, so this flag allows us to still run [upgrade] tests without raft enabled (or with raft disabled), so we have better coverage True SCT_DISABLE_RAFT
enable_tablets_on_upgrade By default, the tablets feature is disabled. With this parameter, created for the upgrade test, the tablets feature will only be enabled after the upgrade N/A SCT_ENABLE_TABLETS_ON_UPGRADE
upgrade_node_packages N/A SCT_UPGRADE_NODE_PACKAGES
upgrade_node_system Upgrade system packages on nodes before upgrading Scylla. Enabled by default N/A SCT_UPGRADE_NODE_SYSTEM
test_sst3 N/A SCT_TEST_SST3
test_upgrade_from_installed_3_1_0 Enable an option for installed 3.1.0 to work around a scylla issue if it's true N/A SCT_TEST_UPGRADE_FROM_INSTALLED_3_1_0
recover_system_tables N/A SCT_RECOVER_SYSTEM_TABLES
stress_cmd_1 cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_1
stress_cmd_complex_prepare cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_COMPLEX_PREPARE
prepare_write_stress cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_PREPARE_WRITE_STRESS
stress_cmd_read_10m cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_READ_10M
stress_cmd_read_cl_one cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
N/A SCT_STRESS_CMD_READ_CL_ONE
stress_cmd_read_60m cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_READ_60M
stress_cmd_complex_verify_read cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_COMPLEX_VERIFY_READ
stress_cmd_complex_verify_more cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_COMPLEX_VERIFY_MORE
write_stress_during_entire_test cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_WRITE_STRESS_DURING_ENTIRE_TEST
verify_data_after_entire_test cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
N/A SCT_VERIFY_DATA_AFTER_ENTIRE_TEST
stress_cmd_read_cl_quorum cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_READ_CL_QUORUM
verify_stress_after_cluster_upgrade cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_VERIFY_STRESS_AFTER_CLUSTER_UPGRADE
stress_cmd_complex_verify_delete cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_COMPLEX_VERIFY_DELETE
scylla_encryption_options options that will be used to enable encryption at-rest for tables N/A SCT_SCYLLA_ENCRYPTION_OPTIONS
kms_key_rotation_interval The time interval in minutes which gets waited before the KMS key rotation happens. Applied when the AWS KMS service is configured to be used. N/A SCT_KMS_KEY_ROTATION_INTERVAL
enterprise_disable_kms An escape hatch to disable KMS for enterprise runs when needed; we enable KMS by default when using scylla 2023.1.3 and up N/A SCT_ENTERPRISE_DISABLE_KMS
logs_transport How to transport logs: syslog-ng, ssh or docker syslog-ng SCT_LOGS_TRANSPORT
collect_logs Collect logs from instances and sct runner N/A SCT_COLLECT_LOGS
execute_post_behavior Run post behavior actions in sct teardown step N/A SCT_EXECUTE_POST_BEHAVIOR
post_behavior_db_nodes Failure/post test behavior, i.e. what to do with the db cloud instances at the end of the test.

'destroy' - Destroy instances and credentials (default)
'keep' - Keep instances running and leave credentials alone
'keep-on-failure' - Keep instances if testrun failed
keep-on-failure SCT_POST_BEHAVIOR_DB_NODES
post_behavior_loader_nodes Failure/post test behavior, i.e. what to do with the loader cloud instances at the end of the test.

'destroy' - Destroy instances and credentials (default)
'keep' - Keep instances running and leave credentials alone
'keep-on-failure' - Keep instances if testrun failed
destroy SCT_POST_BEHAVIOR_LOADER_NODES
post_behavior_monitor_nodes Failure/post test behavior, i.e. what to do with the monitor cloud instances at the end of the test.

'destroy' - Destroy instances and credentials (default)
'keep' - Keep instances running and leave credentials alone
'keep-on-failure' - Keep instances if testrun failed
keep-on-failure SCT_POST_BEHAVIOR_MONITOR_NODES
post_behavior_k8s_cluster Failure/post test behavior, i.e. what to do with the k8s cluster at the end of the test.

'destroy' - Destroy k8s cluster and credentials (default)
'keep' - Keep k8s cluster running and leave credentials alone
'keep-on-failure' - Keep k8s cluster if testrun failed
keep-on-failure SCT_POST_BEHAVIOR_K8S_CLUSTER
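A sketch mirroring the defaults listed above: keep DB and monitor nodes for debugging only when the run fails, and always reclaim loaders:

```yaml
execute_post_behavior: true
post_behavior_db_nodes: keep-on-failure
post_behavior_monitor_nodes: keep-on-failure
post_behavior_loader_nodes: destroy
```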
internode_compression scylla option: internode_compression N/A SCT_INTERNODE_COMPRESSION
internode_encryption scylla sub option of server_encryption_options: internode_encryption all SCT_INTERNODE_ENCRYPTION
jmx_heap_memory The total size of the memory allocated to JMX. Values in MB, so for 1GB enter 1024(MB) N/A SCT_JMX_HEAP_MEMORY
store_perf_results A flag that indicates whether or not to gather the prometheus stats at the end of the run.
Intended to be used in performance testing
N/A SCT_STORE_PERF_RESULTS
append_scylla_setup_args More arguments to append to scylla_setup command line N/A SCT_APPEND_SCYLLA_SETUP_ARGS
use_preinstalled_scylla Don't install/update ScyllaDB on DB nodes N/A SCT_USE_PREINSTALLED_SCYLLA
force_run_iotune Force running iotune on the DB nodes, regardless of whether the image has predefined values N/A SCT_FORCE_RUN_IOTUNE
stress_cdclog_reader_cmd cdc-stressor command to read cdc_log table.
You can specify everything but the -node, -keyspace and -table parameters, which are going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
cdc-stressor -stream-query-round-duration 30s SCT_STRESS_CDCLOG_READER_CMD
store_cdclog_reader_stats_in_es Add cdclog reader stats to ES for future performance result calculating N/A SCT_STORE_CDCLOG_READER_STATS_IN_ES
stop_test_on_stress_failure If set to True the test will be stopped immediately when a stress command fails.
When set to False the test will continue to run even when there are errors in the
stress process
True SCT_STOP_TEST_ON_STRESS_FAILURE
stress_cdc_log_reader_batching_enable retrieving data from multiple streams in one poll True SCT_STRESS_CDC_LOG_READER_BATCHING_ENABLE
use_legacy_cluster_init Use legacy cluster initialization with autobootstrap disabled and parallel node setup N/A SCT_USE_LEGACY_CLUSTER_INIT
availability_zone Availability zone to use. Specify multiple (comma separated) to deploy resources to multiple AZs (works on AWS).
Same for multi-region scenario.
N/A SCT_AVAILABILITY_ZONE
aws_fallback_to_next_availability_zone Try all availability zones one by one in order to maximize the chances of getting
the requested instance capacity.
N/A SCT_AWS_FALLBACK_TO_NEXT_AVAILABILITY_ZONE
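A sketch of spreading an AWS run across several availability zones, with the fallback described above (zone letters are illustrative):

```yaml
availability_zone: "a,b"
aws_fallback_to_next_availability_zone: true
```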
num_nodes_to_rollback Number of nodes to upgrade and rollback in test_generic_cluster_upgrade N/A SCT_NUM_NODES_TO_ROLLBACK
upgrade_sstables Whether to upgrade sstables as part of upgrade_node or not N/A SCT_UPGRADE_SSTABLES
stress_before_upgrade Stress command to be run before upgrade (prepare stage) N/A SCT_STRESS_BEFORE_UPGRADE
stress_during_entire_upgrade Stress command to be run during the upgrade - the user should take care to choose a suitable duration N/A SCT_STRESS_DURING_ENTIRE_UPGRADE
stress_after_cluster_upgrade Stress command to be run after full upgrade - usually used to read the dataset for verification N/A SCT_STRESS_AFTER_CLUSTER_UPGRADE
jepsen_scylla_repo Link to the git repository with Jepsen Scylla tests https://github.com/jepsen-io/scylla.git SCT_JEPSEN_SCYLLA_REPO
jepsen_test_cmd Jepsen test command (e.g., 'test-all') ['test-all -w cas-register --concurrency 10n', 'test-all -w counter --concurrency 10n', 'test-all -w cmap --concurrency 10n', 'test-all -w cset --concurrency 10n', 'test-all -w write-isolation --concurrency 10n', 'test-all -w list-append --concurrency 10n', 'test-all -w wr-register --concurrency 10n'] SCT_JEPSEN_TEST_CMD
jepsen_test_count possible number of reruns of single Jepsen test command 1 SCT_JEPSEN_TEST_COUNT
jepsen_test_run_policy Jepsen test run policy (i.e., what we want to consider as passed for a single test)

'most' - most test runs are passed
'any' - one pass is enough
'all' - all test runs should pass
all SCT_JEPSEN_TEST_RUN_POLICY
max_events_severities Limit severity level for event types N/A SCT_MAX_EVENTS_SEVERITIES
scylla_rsyslog_setup Configure rsyslog on Scylla nodes to send logs to monitoring nodes N/A SCT_SCYLLA_RSYSLOG_SETUP
events_limit_in_email Limit the number of events in email reports 10 SCT_EVENTS_LIMIT_IN_EMAIL
data_volume_disk_num Number of additional data volumes attached to instances
if data_volume_disk_num > 0, then data volumes (ebs on aws) will be
used for scylla data directory
N/A SCT_DATA_VOLUME_DISK_NUM
data_volume_disk_type Type of additional volumes: gp2 | gp3 | io2 | io3 N/A SCT_DATA_VOLUME_DISK_TYPE
data_volume_disk_size Size of additional volume in GB N/A SCT_DATA_VOLUME_DISK_SIZE
data_volume_disk_iops Number of iops for ebs type io2 | io3 | gp3 N/A SCT_DATA_VOLUME_DISK_IOPS
data_volume_disk_throughput Throughput in MiB/sec for ebs type gp3. Min is 125. Max is 1000. N/A SCT_DATA_VOLUME_DISK_THROUGHPUT
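A sketch of attaching extra gp3 data volumes for the Scylla data directory (sizes and rates are illustrative; throughput must stay within the 125-1000 MiB/sec range noted above):

```yaml
data_volume_disk_num: 2
data_volume_disk_type: gp3
data_volume_disk_size: 500
data_volume_disk_iops: 10000
data_volume_disk_throughput: 500
```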
run_db_node_benchmarks Flag for running db node benchmarks before the tests N/A SCT_RUN_DB_NODE_BENCHMARKS
nemesis_selector nemesis_selector gets a list of "nemesis properties" and filters IN all the nemeses that have
ALL the properties in that list set to true (the intersection of all properties).
(In other words, it filters out every nemesis that doesn't have ONE of these properties set to true)
IMPORTANT: If a property doesn't exist, ALL the nemeses will be included.
N/A SCT_NEMESIS_SELECTOR
nemesis_exclude_disabled nemesis_exclude_disabled determines whether 'disabled' nemeses are filtered out of the list
or are allowed to be used. This makes it easy to disable overly 'risky' or 'extreme' nemeses by default,
for all longevities. For example: it is undesirable to run the ToggleGcModeMonkey in standard longevities
that run a stress with data validation.
True SCT_NEMESIS_EXCLUDE_DISABLED
nemesis_multiply_factor Multiply the list of nemesis to execute by the specified factor 6 SCT_NEMESIS_MULTIPLY_FACTOR
nemesis_double_load_during_grow_shrink_duration After growing (and before shrink) in GrowShrinkCluster nemesis it will double the load for provided duration. N/A SCT_NEMESIS_DOUBLE_LOAD_DURING_GROW_SHRINK_DURATION
raid_level Raid level to use: 0 - RAID0, 5 - RAID5 N/A SCT_RAID_LEVEL
bare_loaders Don't install anything but node_exporter to the loaders during cluster setup N/A SCT_BARE_LOADERS
stress_image Dict of the images to use for the stress tools N/A SCT_STRESS_IMAGE
scylla_network_config Configure Scylla networking with single or multiple NIC/IP combinations.
It must be defined for listen_address and rpc_address. For each address mandatory parameters are:
- address: listen_address/rpc_address/broadcast_rpc_address/broadcast_address/test_communication
- ip_type: ipv4 or ipv6
- public: false or true
- nic: number of NIC. 0, 1
Supported on AWS only for now
N/A SCT_SCYLLA_NETWORK_CONFIG
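A sketch of a scylla_network_config with the mandatory fields listed above, assuming a dual-NIC AWS setup (the exact address/NIC mapping is illustrative):

```yaml
scylla_network_config:
  - address: listen_address
    ip_type: ipv4
    public: false
    nic: 0
  - address: broadcast_rpc_address
    ip_type: ipv4
    public: true
    nic: 1
```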
enable_argus Control reporting to argus True SCT_ENABLE_ARGUS
cs_populating_distribution set c-s parameter '-pop' with gauss/uniform distribution for
performance gradual throughput growth tests
N/A SCT_CS_POPULATING_DISTRIBUTION
latte_schema_parameters Optional. Allows passing custom rune script parameters through to the 'latte schema' command. N/A SCT_LATTE_SCHEMA_PARAMETERS
num_loaders_step Number of loaders which should be added per step N/A SCT_NUM_LOADERS_STEP
stress_threads_start_num Number of threads for c-s command N/A SCT_STRESS_THREADS_START_NUM
num_threads_step Number of threads which should be added per step N/A SCT_NUM_THREADS_STEP
stress_step_duration Duration of time for stress round 15m SCT_STRESS_STEP_DURATION
max_deviation Max relative difference between the best and current throughput;
if the current throughput is larger than the best by max_rel_diff, it becomes the new best one
N/A SCT_MAX_DEVIATION
n_stress_process Number of stress processes per loader N/A SCT_N_STRESS_PROCESS
stress_process_step add/remove this number of stress processes on each round N/A SCT_STRESS_PROCESS_STEP
use_hdr_cs_histogram Enable hdr histogram logging for cs N/A SCT_USE_HDR_CS_HISTOGRAM
stop_on_hw_perf_failure Stop sct performance test if hardware performance test failed

Hardware performance tests run on each node with the sysbench and cassandra-fio tools.
Results are stored in ES. HW perf tests run during cluster setup and do not affect
SCT performance tests. Results are calculated as an average among all results for a certain
instance type, or among all nodes during a single run.
If the result for a single node is not within a margin of 0.01 of the
average result for all nodes, the hw test is considered Failed.
If stop_on_hw_perf_failure is True, then the sct performance test will be terminated
after hw perf tests detect a node with hw results not within the margin of the average.
If stop_on_hw_perf_failure is False, then the sct performance test will keep running
even after hw perf tests detect a node with hw results not within the margin of the average.
N/A SCT_STOP_ON_HW_PERF_FAILURE
custom_es_index Use custom ES index for storing test results N/A SCT_CUSTOM_ES_INDEX
simulated_regions Defines how many regions must be simulated on the Scylla config side. If set then
nodes will be provisioned only using the very first real region defined in the configuration.
N/A SCT_SIMULATED_REGIONS
simulated_racks Forces GossipingPropertyFileSnitch (regardless of endpoint_snitch) to simulate racks.
Provide the number of racks to simulate.
N/A SCT_SIMULATED_RACKS
use_dns_names Use dns names instead of ip addresses for nodes in cluster N/A SCT_USE_DNS_NAMES
validate_large_collections Enable validation for large cells in system table and logs N/A SCT_VALIDATE_LARGE_COLLECTIONS
run_commit_log_check_thread Run commit log check thread if commitlog_use_hard_size_limit is True True SCT_RUN_COMMIT_LOG_CHECK_THREAD
teardown_validators Configuration for additional validations executed after the test {'scrub': {'enabled': False, 'timeout': 1200, 'keyspace': '', 'table': ''}, 'test_error_events': {'enabled': False, 'failing_events': [{'event_class': 'DatabaseLogEvent', 'event_type': 'RUNTIME_ERROR', 'regex': '.runtime_error.'}, {'event_class': 'CoreDumpEvent'}]}} SCT_TEARDOWN_VALIDATORS
use_capacity_reservation reserves instance capacity for the whole duration of the test run (AWS only).
Falls back to the next availability zone if capacity is not available
N/A SCT_USE_CAPACITY_RESERVATION
bisect_start_date Scylla build date from which bisecting should start.
Setting this date enables bisection. Format: YYYY-MM-DD
N/A SCT_BISECT_START_DATE
bisect_end_date Scylla build date until which bisecting should run. Format: YYYY-MM-DD N/A SCT_BISECT_END_DATE
kafka_backend Kafka backend to set up for the test N/A SCT_KAFKA_BACKEND
kafka_connectors configuration for setting up kafka connectors N/A SCT_KAFKA_CONNECTORS
run_scylla_doctor Run scylla-doctor in artifact tests N/A SCT_RUN_SCYLLA_DOCTOR
skip_test_stages Skip selected stages of a test scenario N/A SCT_SKIP_TEST_STAGES
use_zero_nodes If True, enable zero-token node support in sct (configuration, nemesis) N/A SCT_USE_ZERO_NODES
n_db_zero_token_nodes Number of zero-token nodes in the cluster. The value should be set as "0 1 1"
for a multi-DC configuration, in the same manner as 'n_db_nodes', and the number of entries should
equal the number of regions
N/A SCT_N_DB_ZERO_TOKEN_NODES
zero_token_instance_type_db Instance type for zero token node i4i.large SCT_ZERO_TOKEN_INSTANCE_TYPE_DB
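A sketch of a zero-token-node layout tying together the last few options (region counts reuse the "0 1 1" example above; the instance type is the documented default):

```yaml
use_zero_nodes: true
n_db_nodes: "3 3 3"
n_db_zero_token_nodes: "0 1 1"
zero_token_instance_type_db: i4i.large
```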