From dc094b109682840e0aa85fb60976a2f690a48711 Mon Sep 17 00:00:00 2001
From: kevinlinglesas <36995745+kevinlinglesas@users.noreply.github.com>
Date: Mon, 14 Sep 2020 17:22:25 -0400
Subject: [PATCH 01/31] Delete VERSIONS.md

The adoption of semver.org guidelines for semantic versioning of this
project makes this file obsolete, so it's being removed.
---
 VERSIONS.md | 43 -------------------------------------------
 1 file changed, 43 deletions(-)
 delete mode 100644 VERSIONS.md

diff --git a/VERSIONS.md b/VERSIONS.md
deleted file mode 100644
index 0963600..0000000
--- a/VERSIONS.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# Versions in SAS Viya ARK
-
-## Rationale for numbering
-
-* Over time, SAS Viya ARK will evolve
-  * possibly, some bugs need to be corrected
-  * maybe some new features are added
-  * a new version of SAS Viya is available, and SAS Viya ARK needs to be updated
-* For all these reasons, SAS Viya ARK needs its own numbering scheme, distinct from, and yet related to, the versions of SAS Viya itself
-* Whenever you use SAS Viya ARK, make sure that you know which version you are using
-* If you are unsure, check the version at the top of the [ChangeLog](CHANGELOG.md)
-* If you contact SAS or open an [issue](https://github.com/sassoftware/viya4-ark/issues), make sure to mention the exact version of SAS Viya ARK you are using.
-
-## Numbering scheme
-
-* Example version
-  * the first "modern" version of SAS Viya ARK was called **Viya401-ark-1.0**
-  * `<SAS Viya Version>-ark-<Major Version>.<Minor Version>`
-  * let's decompose that into its constituent parts:
-
-  | Viya401               | -  | ark     | -  | 1                 | . | 0                    |
-  |-----------------------|----|---------|----|-------------------|---|----------------------|
-  | SAS Viya Version      |dash|         |dash| Major Version     |dot| Minor Version        |
-
-* SAS Viya Version is an indicator of the **highest** version of SAS Viya that this has been tested with.
-* SAS Viya ARK will not be backwards-compatible across SAS Viya releases.
-  * In other words, you should aim to use the latest release of SAS Viya ARK, as long as the ViyaXX portion matches the target software
-  * It will be impossible to have a version of SAS Viya ARK that is backwards compatible with much older versions of SAS Viya, and attempting it would prevent healthy development of SAS Viya ARK
-* `ark` is used as a starting marker to show that this is the actual version number of SAS Viya ARK, not of SAS Viya
-* The Major Version number will be incremented when:
-  * There are some major changes in SAS Viya itself, such as an upgrade within a SAS Viya version.
-  * There are major changes in SAS Viya ARK itself, such as an important new feature
-* The Minor Version number will be incremented when:
-  * a bug is fixed
-  * some code is replaced with equivalent code that does not really change the behavior
-  * small improvements are made, such as adding tags, spacing, etc.
-* When the next release of SAS Viya is available, the digits are "reset":
-  * we might go from **Viya401-ark-5.2** to **Viya402-ark-1.0**
-* Changes to the documentation (in the markdown files) will not necessarily trigger an increment in the version, as the documentation does not impact the way the software runs.
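To make the removed scheme concrete, a version string such as Viya401-ark-1.0 can be decomposed programmatically. The helper below is a minimal, hypothetical sketch (it has never been part of SAS Viya ARK) that assumes only the `<SAS Viya Version>-ark-<Major>.<Minor>` format described above:

import re

# Hypothetical helper illustrating the <SAS Viya Version>-ark-<Major>.<Minor> scheme.
_ARK_VERSION = re.compile(r"^Viya(?P<viya>\d+)-ark-(?P<major>\d+)\.(?P<minor>\d+)$")

def parse_ark_version(version):
    """Split a SAS Viya ARK version string into (viya, major, minor)."""
    match = _ARK_VERSION.match(version)
    if match is None:
        raise ValueError("not a recognized SAS Viya ARK version: " + version)
    return match.group("viya"), int(match.group("major")), int(match.group("minor"))

print(parse_ark_version("Viya401-ark-1.0"))  # ('401', 1, 0)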
- - - - From 6a960398c35b06aec7ef5bedc27c1781ffef21dd Mon Sep 17 00:00:00 2001 From: Josh Woods Date: Wed, 16 Sep 2020 16:05:34 -0400 Subject: [PATCH 02/31] Update deployment-report to use argparse instead of getopt --- deployment_report/deployment_report.py | 168 +++++------------- .../model/viya_deployment_report.py | 23 ++- .../test/data/expected_command_output.txt | 12 -- .../test/data/expected_usage_output.txt | 27 +++ .../test/test_deployment_report.py | 21 +-- 5 files changed, 96 insertions(+), 155 deletions(-) delete mode 100644 deployment_report/test/data/expected_command_output.txt create mode 100644 deployment_report/test/data/expected_usage_output.txt diff --git a/deployment_report/deployment_report.py b/deployment_report/deployment_report.py index c2de49e..d19033a 100644 --- a/deployment_report/deployment_report.py +++ b/deployment_report/deployment_report.py @@ -9,11 +9,11 @@ # SPDX-License-Identifier: Apache-2.0 ### # ### #################################################################### -import getopt import os import sys -from typing import List, Optional, Text, Tuple +from argparse import ArgumentParser +from typing import List, Text from deployment_report.model.viya_deployment_report import ViyaDeploymentReport @@ -21,39 +21,6 @@ from viya_ark_library.k8s.sas_k8s_errors import KubectlRequestForbiddenError, NamespaceNotFoundError from viya_ark_library.k8s.sas_kubectl import Kubectl -# command line options -_DATA_FILE_OPT_SHORT_ = "d" -_DATA_FILE_OPT_LONG_ = "data-file-only" -_DATA_FILE_OPT_DESC_TMPL_ = " -{}, --{:<30} (Optional) Generate only the report data JSON file." - -_KUBECTL_GLOBAL_OPT_SHORT_ = "k" -_KUBECTL_GLOBAL_OPT_LONG_ = "kubectl-global-opts" -_KUBECTL_GLOBAL_OPT_DESC_TMPL_ = " -{}, --{:<30} (Optional) Any kubectl global options to use with all executions " \ - "(excluding namespace, which should be set using -n, --namespace=)." - -_POD_SNIP_OPT_SHORT_ = "l" -_POD_SNIP_OPT_LONG_ = "include-pod-log-snips" -_POD_SNIP_OPT_DESC_TMPL_ = " -{}, --{:<30} (Optional) Include a 10-line log snippet for each pod container " \ - "(increases command runtime and file size)." - -_NAMESPACE_OPT_SHORT_ = "n" -_NAMESPACE_OPT_LONG_ = "namespace" -_NAMESPACE_OPT_DESC_TMPL_ = " -{}, --{:<30} (Optional) The namespace to target containing SAS software, if not " \ - "defined by KUBECONFIG." - -_OUTPUT_DIR_OPT_SHORT_ = "o" -_OUTPUT_DIR_OPT_LONG_ = "output-dir" -_OUTPUT_DIR_OPT_DESC_TMPL_ = " -{}, --{:<30} (Optional) Existing directory where report files will be written." - -_RESOURCE_DEF_OPT_SHORT_ = "r" -_RESOURCE_DEF_OPT_LONG_ = "include-resource-definitions" -_RESOURCE_DEF_OPT_DESC_TMPL_ = " -{}, --{:<30} (Optional) Include the full JSON resource definition for each " \ - "object in the report (increases file size)." - -_HELP_OPT_SHORT_ = "h" -_HELP_OPT_LONG_ = "help" -_HELP_OPT_DESC_TMPL_ = " -{}, --{:<30} Print usage." - # command line return codes # _SUCCESS_RC_ = 0 _BAD_OPT_RC_ = 1 @@ -103,72 +70,47 @@ def main(argv: List): :param argv: The parameters passed to the script at execution. 
""" - data_file_only: bool = False - kubectl_global_opts: Text = "" - include_pod_log_snips: bool = False - namespace: Optional[Text] = None - output_dir: Text = "./" - include_resource_definitions: bool = False - - # define the short options for this script - short_opts: Text = (f"{_DATA_FILE_OPT_SHORT_}" - f"{_KUBECTL_GLOBAL_OPT_SHORT_}:" - f"{_POD_SNIP_OPT_SHORT_}" - f"{_NAMESPACE_OPT_SHORT_}:" - f"{_OUTPUT_DIR_OPT_SHORT_}:" - f"{_RESOURCE_DEF_OPT_SHORT_}" - f"{_HELP_OPT_SHORT_}") - - # define the long options for this script - long_opts: List[Text] = [_DATA_FILE_OPT_LONG_, - f"{_KUBECTL_GLOBAL_OPT_LONG_}=", - _POD_SNIP_OPT_LONG_, - f"{_NAMESPACE_OPT_LONG_}=", - f"{_OUTPUT_DIR_OPT_LONG_}=", - _RESOURCE_DEF_OPT_LONG_, - _HELP_OPT_LONG_] - - # get command line options - opts: Tuple = tuple() - try: - opts, args = getopt.getopt(argv, short_opts, long_opts) - except getopt.GetoptError as opt_error: - print() - print(f"ERROR: {opt_error}", file=sys.stderr) - usage(_BAD_OPT_RC_) - - # process opts - for opt, arg in opts: - if opt in (f"-{_DATA_FILE_OPT_SHORT_}", f"--{_DATA_FILE_OPT_LONG_}"): - data_file_only = True - - elif opt in (f"-{_KUBECTL_GLOBAL_OPT_SHORT_}", f"--{_KUBECTL_GLOBAL_OPT_LONG_}"): - kubectl_global_opts = arg - - elif opt in (f"-{_POD_SNIP_OPT_SHORT_}", f"--{_POD_SNIP_OPT_LONG_}"): - include_pod_log_snips = True - - elif opt in (f"-{_NAMESPACE_OPT_SHORT_}", f"--{_NAMESPACE_OPT_LONG_}"): - namespace = arg - - elif opt in (f"-{_OUTPUT_DIR_OPT_SHORT_}", f"--{_OUTPUT_DIR_OPT_LONG_}"): - output_dir = arg - - elif opt in (f"-{_RESOURCE_DEF_OPT_SHORT_}", f"--{_RESOURCE_DEF_OPT_LONG_}"): - include_resource_definitions = True - - elif opt in (f"-{_HELP_OPT_SHORT_}", f"--{_HELP_OPT_LONG_}"): - usage(_SUCCESS_RC_) - - else: - print() - print(f"ERROR: option {opt} not recognized", file=sys.stderr) - usage(_BAD_OPT_RC_) + # configure ArgumentParser + arg_parser: ArgumentParser = ArgumentParser(prog=f"viya-ark.py {ViyaDeploymentReportCommand.command_name()}", + description=ViyaDeploymentReportCommand.command_desc()) + + # add optional arguments + # data-file-only + arg_parser.add_argument( + "-d", "--data-file-only", action="store_true", dest="data_file_only", + help="Generate only the JSON-formatted data.") + # kubectl-global-opts + arg_parser.add_argument( + "-k", "--kubectl-global-opts", type=Text, default="", dest="kubectl_global_opts", + help="Any kubectl global options to use with all executions (excluding namespace, which should be set using " + "-n, --namespace).") + # include-pod-log-snips + arg_parser.add_argument( + "-l", "--include-pod-log-snips", action="store_true", dest="include_pod_log_snips", + help="Include the most recent log lines (up to 10) for each container in the report. This option increases " + "command runtime and file size.") + # namespace + arg_parser.add_argument( + "-n", "--namespace", type=Text, default=None, dest="namespace", + help="Namespace to target containing SAS software, if not defined by KUBECONFIG.") + # output-dir + arg_parser.add_argument( + "-o", "--output-dir", type=Text, default=ViyaDeploymentReport.OUTPUT_DIRECTORY_DEFAULT, dest="output_dir", + help="Directory where log files will be written. Defaults to " + f"\"{ViyaDeploymentReport.OUTPUT_DIRECTORY_DEFAULT}\".") + # include-resource-definitions + arg_parser.add_argument( + "-r", "--include-resource-definitions", action="store_true", dest="include_resource_definitions", + help="Include the full JSON-formatted definition for each resource in the report. 
This option increases file " + "size.") + + # parse the args passed to this command + args = arg_parser.parse_args(argv) # initialize the kubectl object # this will also verify the connection to the cluster and if the namespace is valid, if provided try: - kubectl: Kubectl = Kubectl(namespace=namespace, global_opts=kubectl_global_opts) + kubectl: Kubectl = Kubectl(namespace=args.namespace, global_opts=args.kubectl_global_opts) except ConnectionError as e: print() print(f"ERROR: {e}", file=sys.stderr) @@ -186,7 +128,7 @@ def main(argv: List): # gather the details for the report try: sas_deployment_report.gather_details(kubectl=kubectl, - include_pod_log_snips=include_pod_log_snips) + include_pod_log_snips=args.include_pod_log_snips) except KubectlRequestForbiddenError as e: print() print(f"ERROR: {e}", file=sys.stderr) @@ -198,37 +140,13 @@ def main(argv: List): print() sys.exit(_RUNTIME_ERROR_RC_) - sas_deployment_report.write_report(output_directory=output_dir, - data_file_only=data_file_only, - include_resource_definitions=include_resource_definitions) + sas_deployment_report.write_report(output_directory=args.output_dir, + data_file_only=args.data_file_only, + include_resource_definitions=args.include_resource_definitions) sys.exit(_SUCCESS_RC_) -################### -# usage() # -################### -def usage(exit_code: int): - """ - Prints the usage information for the deployment-report command and exits the program with the given exit_code. - - :param exit_code: The exit code to return when exiting the program. - """ - print() - print(f"Usage: {ViyaDeploymentReportCommand.command_name()} []") - print() - print("Options:") - print(_DATA_FILE_OPT_DESC_TMPL_.format(_DATA_FILE_OPT_SHORT_, _DATA_FILE_OPT_LONG_)) - print(_KUBECTL_GLOBAL_OPT_DESC_TMPL_.format(_KUBECTL_GLOBAL_OPT_SHORT_, _KUBECTL_GLOBAL_OPT_LONG_ + "=\"\"")) - print(_POD_SNIP_OPT_DESC_TMPL_.format(_POD_SNIP_OPT_SHORT_, _POD_SNIP_OPT_LONG_)) - print(_NAMESPACE_OPT_DESC_TMPL_.format(_NAMESPACE_OPT_SHORT_, _NAMESPACE_OPT_LONG_ + "=\"\"")) - print(_OUTPUT_DIR_OPT_DESC_TMPL_.format(_OUTPUT_DIR_OPT_SHORT_, _OUTPUT_DIR_OPT_LONG_ + "=\"\"")) - print(_RESOURCE_DEF_OPT_DESC_TMPL_.format(_RESOURCE_DEF_OPT_SHORT_, _RESOURCE_DEF_OPT_LONG_)) - print(_HELP_OPT_DESC_TMPL_.format(_HELP_OPT_SHORT_, _HELP_OPT_LONG_)) - print() - sys.exit(exit_code) - - #################### # __main__ # #################### diff --git a/deployment_report/model/viya_deployment_report.py b/deployment_report/model/viya_deployment_report.py index 93b6b86..4157a2a 100644 --- a/deployment_report/model/viya_deployment_report.py +++ b/deployment_report/model/viya_deployment_report.py @@ -39,6 +39,12 @@ # SAS custom API resource group id # _SAS_API_GROUP_ID_ = "sas.com" +# default values for arguments shared between the model and command # +DATA_FILE_ONLY_DEFAULT: bool = False +INCLUDE_POD_LOG_SNIPS_DEFAULT: bool = False +INCLUDE_RESOURCE_DEFINITIONS_DEFAULT: bool = False +OUTPUT_DIRECTORY_DEFAULT: Text = "./" + class ViyaDeploymentReport(object): """ @@ -49,13 +55,21 @@ class ViyaDeploymentReport(object): The gathered data can be written to disk as an HTML report and a JSON file containing the gathered data. """ + + # default values for arguments shared between the model and command # + DATA_FILE_ONLY_DEFAULT: bool = False + INCLUDE_POD_LOG_SNIPS_DEFAULT: bool = False + INCLUDE_RESOURCE_DEFINITIONS_DEFAULT: bool = False + OUTPUT_DIRECTORY_DEFAULT: Text = "./" + def __init__(self) -> None: """ Constructor for ViyaDeploymentReport object. 
""" self._report_data = None - def gather_details(self, kubectl: KubectlInterface, include_pod_log_snips: bool = False) -> None: + def gather_details(self, kubectl: KubectlInterface, + include_pod_log_snips: bool = INCLUDE_POD_LOG_SNIPS_DEFAULT) -> None: """ This method executes the gathering of Kubernetes resources related to SAS components. Before this method is executed class fields will have a None value. This method will @@ -496,9 +510,10 @@ def get_sas_component_resources(self, component_name: Text, resource_kind: Text) except KeyError: return None - def write_report(self, output_directory: Text = "", data_file_only: bool = False, - include_resource_definitions: bool = False, file_timestamp: Optional[Text] = None) \ - -> Tuple[Optional[AnyStr], Optional[AnyStr]]: + def write_report(self, output_directory: Text = OUTPUT_DIRECTORY_DEFAULT, + data_file_only: bool = DATA_FILE_ONLY_DEFAULT, + include_resource_definitions: bool = INCLUDE_RESOURCE_DEFINITIONS_DEFAULT, + file_timestamp: Optional[Text] = None) -> Tuple[Optional[AnyStr], Optional[AnyStr]]: """ Writes the report data to a file as a JSON string. diff --git a/deployment_report/test/data/expected_command_output.txt b/deployment_report/test/data/expected_command_output.txt deleted file mode 100644 index 44d2ead..0000000 --- a/deployment_report/test/data/expected_command_output.txt +++ /dev/null @@ -1,12 +0,0 @@ - -Usage: deployment-report [] - -Options: - -d, --data-file-only (Optional) Generate only the report data JSON file. - -k, --kubectl-global-opts="" (Optional) Any kubectl global options to use with all executions (excluding namespace, which should be set using -n, --namespace=). - -l, --include-pod-log-snips (Optional) Include a 10-line log snippet for each pod container (increases command runtime and file size). - -n, --namespace="" (Optional) The namespace to target containing SAS software, if not defined by KUBECONFIG. - -o, --output-dir="" (Optional) Existing directory where report files will be written. - -r, --include-resource-definitions (Optional) Include the full JSON resource definition for each object in the report (increases file size). - -h, --help Print usage. - diff --git a/deployment_report/test/data/expected_usage_output.txt b/deployment_report/test/data/expected_usage_output.txt new file mode 100644 index 0000000..df2ff61 --- /dev/null +++ b/deployment_report/test/data/expected_usage_output.txt @@ -0,0 +1,27 @@ +usage: viya-ark.py deployment-report [-h] [-d] [-k KUBECTL_GLOBAL_OPTS] [-l] + [-n NAMESPACE] [-o OUTPUT_DIR] [-r] + +Generate a deployment report of SAS components for a target Kubernetes +environment. + +optional arguments: + -h, --help show this help message and exit + -d, --data-file-only Generate only the JSON-formatted data. + -k KUBECTL_GLOBAL_OPTS, --kubectl-global-opts KUBECTL_GLOBAL_OPTS + Any kubectl global options to use with all executions + (excluding namespace, which should be set using -n, + --namespace). + -l, --include-pod-log-snips + Include the most recent log lines (up to 10) for each + container in the report. This option increases command + runtime and file size. + -n NAMESPACE, --namespace NAMESPACE + Namespace to target containing SAS software, if not + defined by KUBECONFIG. + -o OUTPUT_DIR, --output-dir OUTPUT_DIR + Directory where log files will be written. Defaults to + "./". + -r, --include-resource-definitions + Include the full JSON-formatted definition for each + resource in the report. This option increases file + size. 
diff --git a/deployment_report/test/test_deployment_report.py b/deployment_report/test/test_deployment_report.py index 5f2be4c..195e41d 100644 --- a/deployment_report/test/test_deployment_report.py +++ b/deployment_report/test/test_deployment_report.py @@ -13,7 +13,7 @@ import os import pytest -from deployment_report.deployment_report import ViyaDeploymentReportCommand, usage +from deployment_report.deployment_report import ViyaDeploymentReportCommand, main #################################################################### # There is not unit test defined for: @@ -48,27 +48,20 @@ def test_viya_deployment_report_command_command_desc(): def test_usage(capfd): - # test that a SystemExit is raised - with pytest.raises(SystemExit) as sys_exit: - usage(0) + _argv: list = ["-h"] - # make sure the exit value is correct - assert sys_exit.value.code == 0 + # run main + with pytest.raises(SystemExit): + main(_argv) # define expected output current_dir = os.path.dirname(os.path.abspath(__file__)) - test_data_file = os.path.join(current_dir, f"data{os.sep}expected_command_output.txt") + test_data_file = os.path.join(current_dir, f"data{os.sep}expected_usage_output.txt") with open(test_data_file) as f: expected = f.read() - # get the captured output + # get output out, err = capfd.readouterr() - # assert that the captured output matches the expected assert out == expected - - # make sure that a non-zero exit code is correct - with pytest.raises(SystemExit) as sys_exit: - usage(5) - assert sys_exit.value.code == 5 From f037c19dfde6e432249c715b327a40861e9c9df7 Mon Sep 17 00:00:00 2001 From: Josh Woods Date: Wed, 16 Sep 2020 16:08:20 -0400 Subject: [PATCH 03/31] Remove redundant var definitions --- deployment_report/model/viya_deployment_report.py | 6 ------ 1 file changed, 6 deletions(-) diff --git a/deployment_report/model/viya_deployment_report.py b/deployment_report/model/viya_deployment_report.py index 4157a2a..cf315cb 100644 --- a/deployment_report/model/viya_deployment_report.py +++ b/deployment_report/model/viya_deployment_report.py @@ -39,12 +39,6 @@ # SAS custom API resource group id # _SAS_API_GROUP_ID_ = "sas.com" -# default values for arguments shared between the model and command # -DATA_FILE_ONLY_DEFAULT: bool = False -INCLUDE_POD_LOG_SNIPS_DEFAULT: bool = False -INCLUDE_RESOURCE_DEFINITIONS_DEFAULT: bool = False -OUTPUT_DIRECTORY_DEFAULT: Text = "./" - class ViyaDeploymentReport(object): """ From 9321cddb0b6e4044f1786de9d1b7a566316a739c Mon Sep 17 00:00:00 2001 From: kevinlinglesas <36995745+kevinlinglesas@users.noreply.github.com> Date: Thu, 24 Sep 2020 11:18:08 -0400 Subject: [PATCH 04/31] Update VIYA_KUBELET_VERSION_MIN to 1.18 #23 --- pre_install_report/viya_cluster_settings.properties | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pre_install_report/viya_cluster_settings.properties b/pre_install_report/viya_cluster_settings.properties index c5c8333..169a95a 100644 --- a/pre_install_report/viya_cluster_settings.properties +++ b/pre_install_report/viya_cluster_settings.properties @@ -27,4 +27,4 @@ VIYA_MIN_AGGREGATE_WORKER_MEMORY=56G # Minimum allowed value = '.001'. Recommended = 12. 
VIYA_MIN_AGGREGATE_WORKER_CPU_CORES=12 # Supported versions of Kubelet Version -VIYA_KUBELET_VERSION_MIN=v1.14.0 +VIYA_KUBELET_VERSION_MIN=v1.18.0 From 64c5f5a80657e099ebe8a084cf78c0dad942986b Mon Sep 17 00:00:00 2001 From: Josh Woods Date: Thu, 24 Sep 2020 14:07:30 -0400 Subject: [PATCH 05/31] Adding use of LRPIndicator to deployment-report for visual consistency --- deployment_report/deployment_report.py | 19 ++++++++++++++----- .../model/viya_deployment_report.py | 8 -------- 2 files changed, 14 insertions(+), 13 deletions(-) diff --git a/deployment_report/deployment_report.py b/deployment_report/deployment_report.py index d19033a..65ec483 100644 --- a/deployment_report/deployment_report.py +++ b/deployment_report/deployment_report.py @@ -18,6 +18,7 @@ from deployment_report.model.viya_deployment_report import ViyaDeploymentReport from viya_ark_library.command import Command +from viya_ark_library.lrp_indicator import LRPIndicator from viya_ark_library.k8s.sas_k8s_errors import KubectlRequestForbiddenError, NamespaceNotFoundError from viya_ark_library.k8s.sas_kubectl import Kubectl @@ -127,8 +128,10 @@ def main(argv: List): # gather the details for the report try: - sas_deployment_report.gather_details(kubectl=kubectl, - include_pod_log_snips=args.include_pod_log_snips) + print() + with LRPIndicator(enter_message="Generating deployment report"): + sas_deployment_report.gather_details(kubectl=kubectl, + include_pod_log_snips=args.include_pod_log_snips) except KubectlRequestForbiddenError as e: print() print(f"ERROR: {e}", file=sys.stderr) @@ -140,9 +143,15 @@ def main(argv: List): print() sys.exit(_RUNTIME_ERROR_RC_) - sas_deployment_report.write_report(output_directory=args.output_dir, - data_file_only=args.data_file_only, - include_resource_definitions=args.include_resource_definitions) + data_file, html_file = sas_deployment_report.write_report( + output_directory=args.output_dir, + data_file_only=args.data_file_only, + include_resource_definitions=args.include_resource_definitions) + + print(f"\nCreated: {data_file}") + if not args.data_file_only: + print(f"Created: {html_file}") + print() sys.exit(_SUCCESS_RC_) diff --git a/deployment_report/model/viya_deployment_report.py b/deployment_report/model/viya_deployment_report.py index cf315cb..022eaed 100644 --- a/deployment_report/model/viya_deployment_report.py +++ b/deployment_report/model/viya_deployment_report.py @@ -531,14 +531,10 @@ def write_report(self, output_directory: Text = OUTPUT_DIRECTORY_DEFAULT, # convert the data to a JSON string # data_json = json.dumps(self._report_data, cls=KubernetesObjectJSONEncoder, indent=4, sort_keys=True) - # blank line for output readability # - print() - # write the report data # data_file_path: Text = output_directory + _REPORT_DATA_FILE_NAME_TMPL_.format(file_timestamp) with open(data_file_path, "w+") as data_file: data_file.write(data_json) - print("Created: {}".format(os.path.abspath(data_file_path))) # write the html file, if requested # html_file_path: Optional[Text] = None @@ -550,9 +546,5 @@ def write_report(self, output_directory: Text = OUTPUT_DIRECTORY_DEFAULT, trim_blocks=True, lstrip_blocks=True, report_data=json.loads(data_json), include_definitions=include_resource_definitions) - print("Created: {}".format(html_file_path)) - - # blank line for output readability # - print() return os.path.abspath(data_file_path), html_file_path From 59e93bb9635f1ea1e3d6ea2b841264587892de89 Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Fri, 2 Oct 2020 11:06:16 -0400 Subject: [PATCH 06/31] 
(lasivasas-issue-26) Commit files Check Permissions for azure Persistent Volume Claims --- .../library/pre_install_check.py | 129 ++++++-------- .../library/pre_install_check_permissions.py | 166 ++++++++++++++++++ .../library/pre_install_utils.py | 62 ++++++- .../library/utils/pvc_azure_file.template | 11 ++ .../utils/pvc_azure_file_premium.template | 11 ++ .../utils/pvc_azure_managed_premium.template | 11 ++ pre_install_report/library/utils/pvc_nfs.yaml | 41 +++++ .../library/utils/viya_constants.py | 3 + pre_install_report/pre_install_report.py | 15 +- .../test_data/json_data/azure_file_pvc.json | 21 +++ .../json_data/azure_sc_detailed.json | 111 ++++++++++++ .../json_data/azure_storage_class.json | 158 +++++++++++++++++ viya_ark_library/k8s/sas_k8s_objects.py | 23 +++ 13 files changed, 675 insertions(+), 87 deletions(-) create mode 100644 pre_install_report/library/utils/pvc_azure_file.template create mode 100644 pre_install_report/library/utils/pvc_azure_file_premium.template create mode 100644 pre_install_report/library/utils/pvc_azure_managed_premium.template create mode 100644 pre_install_report/library/utils/pvc_nfs.yaml create mode 100644 pre_install_report/test/test_data/json_data/azure_file_pvc.json create mode 100644 pre_install_report/test/test_data/json_data/azure_sc_detailed.json create mode 100644 pre_install_report/test/test_data/json_data/azure_storage_class.json diff --git a/pre_install_report/library/pre_install_check.py b/pre_install_report/library/pre_install_check.py index 8f03cf6..ba0913d 100644 --- a/pre_install_report/library/pre_install_check.py +++ b/pre_install_report/library/pre_install_check.py @@ -104,33 +104,43 @@ def check_details(self, kubectl, ingress_port, ingress_host, ingress_controller, ureg = UnitRegistry(datafile) quantity_ = ureg.Quantity + + pre_check_utils_params = {} + pre_check_utils_params[viya_constants.KUBECTL] = self._kubectl + pre_check_utils_params["logger"] = self.sas_logger + utils = PreCheckUtils(pre_check_utils_params) + configs_data = self.get_config_info() cluster_info = self._get_master_json() master_data = self._check_master(cluster_info) namespace_data = [] - namespace_data = self._check_available_namespaces(self._get_namespaces_json(), namespace_data) + namespace_data = self._check_available_namespaces(self._get_json("namespaces"), namespace_data) - storage_json = self._get_storage_json() + storage_json = self._get_json("storageclass") storage_data = self._get_storage_classes(storage_json) - nodes_json = self._get_nodes_json() + nodes_json = self._get_json("nodes") nodes_data = self.get_nested_nodes_info(nodes_json, quantity_) global_data = [] global_data = self.evaluate_nodes(nodes_data, global_data, cluster_info, quantity_) - cluster_admin_permission_data, namespace_admin_permission_data, \ - ingress_data, ns_admin_permission_aggregate, \ - cluster_admin_permission_aggregate = self._check_permissions(ingress_controller, str(ingress_host), - str(ingress_port)) - - self.generate_report(global_data, master_data, configs_data, - storage_data, - namespace_data, cluster_admin_permission_data, - namespace_admin_permission_data, - ingress_data, - ns_admin_permission_aggregate, - cluster_admin_permission_aggregate, + params = {} + params[viya_constants.INGRESS_CONTROLLER] = ingress_controller + params[viya_constants.INGRESS_HOST] = str(ingress_host) + params[viya_constants.INGRESS_PORT] = str(ingress_port) + params[viya_constants.PERM_CLASS] = utils + params['logger'] = self.sas_logger + + permissions_check = 
PreCheckPermissions(params)
+        self._check_permissions(permissions_check)
+
+        self.generate_report(global_data, master_data, configs_data, storage_data, namespace_data,
+                             permissions_check.get_cluster_admin_permission_data(),
+                             permissions_check.get_namespace_admin_permission_data(),
+                             permissions_check.get_ingress_data(),
+                             permissions_check.get_namespace_admin_permission_aggregate(),
+                             permissions_check.get_cluster_admin_permission_aggregate(),
                             output_dir)
         return
 
@@ -290,34 +300,15 @@ def _get_config_users(self, config_json, configs_data):
             configs_data.append(cluster_data)
         return configs_data
 
-    def _get_nodes_json(self):
+    def _get_json(self, k8sresource):
         """
-        Retrieve the nodes information from the Kubernetes cluster in json format.
+        Retrieve the k8s resource information from the Kubernetes cluster in json format.
 
         return: requested resource information in json format
         """
-        nodes_json = self._get_raw_json("nodes")
-
-        return nodes_json
-
-    def _get_namespaces_json(self):
-        """
-        Retrieve the namespaces information from Kubernetes in json format.
-
-        return: namespaces information in json format
-        """
-        namespace_json = self._get_raw_json("namespaces")
-
-        return namespace_json
-
-    def _get_storage_json(self):
-        """
-        Retrieve the storage class information from Kubernetes in json format
-
-        return: storage class information in json format
-        """
-        storage_json = self._get_raw_json("storageclass")
-        return storage_json
+        resource_json = self._get_raw_json(k8sresource)
 
+        return resource_json
 
     def _get_storage_classes(self, storage_json):
         """
@@ -333,6 +324,8 @@ def _get_storage_classes(self, storage_json):
         for node in storage_json['items']:
             node_data = {}
             node_data['storageClassNameName'] = node['metadata']['name']
+            node_data['provisioner'] = node['provisioner']
+            node_data['selfLink'] = node['metadata']['selfLink']
 
             try:
                 annotated_lines = node['metadata']['annotations']
@@ -398,48 +391,36 @@ def _delete_temp_file(self, file_name):
         if os.path.exists(file_name):
             os.remove(file_name)
 
-    def _check_permissions(self, ingress_controller, ingress_host, ingress_port):
+    def _check_permissions(self, permissions_check: PreCheckPermissions):
         """
         Check if permissions are adequate to complete Viya deployment with cluster admin
         and namespace admin levels of access to cluster resources
 
-        ingress_controller: nginx/istio or other supported controller
-        ingress_host: user specified host for ingress controller
-        ingress_port: user specified port for ingress controller
-        return: dictionary objects with the results of permissions checking
+        permissions_check: instance of PreCheckPermissions class
         """
-        pre_check_utils_params = {}
-        pre_check_utils_params[viya_constants.KUBECTL] = self._kubectl
-        pre_check_utils_params["logger"] = self.sas_logger
-        utils = PreCheckUtils(pre_check_utils_params)
-
-        params = {}
-        params[viya_constants.INGRESS_CONTROLLER] = ingress_controller
-        params[viya_constants.INGRESS_HOST] = ingress_host
-        params[viya_constants.INGRESS_PORT] = ingress_port
-        params[viya_constants.PERM_CLASS] = utils
-        params['logger'] = self.sas_logger
-
-        # initialize the PreCheckPermissions object
-        perms = PreCheckPermissions(params)
         namespace = self._kubectl.get_namespace()
-
-        perms.check_sample_application()
-        perms.check_sample_ingress()
-        perms.check_deploy_crd()
-        perms.check_rbac_role()
-        perms.check_create_custom_resource()
-        perms.check_get_custom_resource(namespace, )
-        perms.check_delete_custom_resource()
-        perms.check_rbac_delete_role()
-        perms.check_sample_response()
-        perms.check_delete_crd()
-        
perms.check_delete_sample_application()
-        perms.check_delete_sample_ingress()
-
-        return perms.get_cluster_admin_permission_data(), perms.get_namespace_admin_permission_data(),\
-            perms.get_ingress_data(), perms.get_namespace_admin_permission_aggregate(), \
-            perms.get_cluster_admin_permission_aggregate()
+        permissions_check.get_sc_resources()
+        permissions_check.manage_pvc(viya_constants.KUBECTL_APPLY, False)
+
+        permissions_check.check_sample_application()
+        permissions_check.check_sample_ingress()
+        permissions_check.check_deploy_crd()
+        permissions_check.check_rbac_role()
+        permissions_check.check_create_custom_resource()
+        permissions_check.check_get_custom_resource(namespace)
+
+        permissions_check.check_delete_custom_resource()
+        permissions_check.check_rbac_delete_role()
+
+        permissions_check.check_sample_response()
+
+        permissions_check.check_delete_crd()
+        permissions_check.check_delete_sample_application()
+        permissions_check.check_delete_sample_ingress()
+        # Check the status of deployed PVCs
+        permissions_check.manage_pvc(viya_constants.KUBECTL_APPLY, True)
+        # Delete all deployed PVCs
+        permissions_check.manage_pvc(viya_constants.KUBECTL_DELETE, False)
 
     def _escape_ansi(self, line):
         """
diff --git a/pre_install_report/library/pre_install_check_permissions.py b/pre_install_report/library/pre_install_check_permissions.py
index b6558ac..85d8a8d 100644
--- a/pre_install_report/library/pre_install_check_permissions.py
+++ b/pre_install_report/library/pre_install_check_permissions.py
@@ -10,12 +10,28 @@
 # SPDX-License-Identifier: Apache-2.0 ###
 #                                                                  ###
 ####################################################################
+import os
+from typing import List
 import requests
+import pprint
 from requests.packages.urllib3.exceptions import InsecureRequestWarning
 
 from pre_install_report.library.utils import viya_constants
 from pre_install_report.library.pre_install_utils import PreCheckUtils
 from viya_ark_library.logging import ViyaARKLogger
+from viya_ark_library.k8s.sas_k8s_objects import KubernetesResource
+
+PVC_AZURE_FILE = "pvc_azure_file.yaml"
+PVC_AZURE_FILE_PREMIUM = "pvc_azure_file_premium.yaml"
+PVC_AZURE_MANAGED_PREMIUM = "pvc_azure_managed_premium.yaml"
+
+PVC_AZURE_FILE_NAME = "pvc-azurefile"
+PVC_AZURE_FILE_PREMIUM_NAME = "pvc-azurefile-premium"
+PVC_AZURE_MANAGED_PREMIUM_NAME = "pvc-azure-managed-premium"
+SC_TYPE_STANDARD_LRS = "Standard_LRS"
+SC_TYPE_PREMIUM_LRS = "Premium_LRS"
+PROVISIONER_AZURE_FILE = "kubernetes.io/azure-file"
+PROVISIONER_AZURE_DISK = "kubernetes.io/azure-disk"
 
 
 class PreCheckPermissions(object):
@@ -50,6 +66,7 @@ def __init__(self, params):
         self.ingress_file = "hello-ingress.yaml"
         if self.ingress_controller == viya_constants.INGRESS_ISTIO:
             self.ingress_file = "helloworld-gateway.yaml"
+        self._storage_class_sc: List[KubernetesResource] = None
 
     def _set_results_cluster_admin(self, resource_key, rc):
         """
@@ -74,6 +91,155 @@ def _set_results_namespace_admin(self, resource_key, rc):
         else:
             self.namespace_admin_permission_data[resource_key] = viya_constants.ADEQUATE_PERMS
 
+    def _get_pvc(self, pvc_name, key):
+        """
+        Get the pvc resource from Kubernetes using kubectl, if it is deployed successfully
+
+        return: KubernetesResource object
+        """
+        k8s_resource: KubernetesResource = self.utils.get_resource("pvc", pvc_name)
+        if k8s_resource is not None:
+            self.check_pvc_phase(k8s_resource, pvc_name, key)
+        else:
+            self._set_results_namespace_admin(key, 1)
+
+    def check_pvc_phase(self, pvc_azurefile, pvc_name, key):
+        """
+        Get the status of the deployed pvc
+
+        action: 
issue kubectl command to get status Bound / Pending / error + + """ + if pvc_azurefile: + self.logger.info("pvc_azurefile {}".format(pprint.pformat(pvc_azurefile.as_dict()))) + if (pvc_azurefile.get_status_value("phase") == "Bound"): + self._set_results_namespace_admin(key, 0) + self.logger.info("{} status {}".format(pvc_name, pvc_azurefile.get_status_value("phase"))) + else: + self._set_results_namespace_admin(key, 1) + self.logger.info("{} status {}".format(pvc_name, pvc_azurefile.get_status_value("phase"))) + self.utils.do_cmd(" describe pvc " + pvc_name) + else: + self._set_results_namespace_admin(key, 1) + + def _replace_sc_name_infile(self, template_file, sc_name, outfile): + + try: + os.remove(self.utils._get_filepath(outfile)) + except OSError as err: + self.logger.error("Unable to delete file: {}".format(err)) + pass + try: + with open(self.utils._get_filepath(template_file), 'r') as infile, \ + open(self.utils._get_filepath(outfile), 'w') as outfile: + content = infile.read() + infile.seek(0) + outfile.write(content.replace('$sc_name$', sc_name)) + outfile.close() + except IOError as err: + self.logger.error("IOError templatefile {} outfile {} error {} ".format(template_file, outfile, str(err))) + pass + + def _delete_pvc_yaml_file(self, outfile): + + try: + os.remove(self.utils._get_filepath(outfile)) + except OSError as err: + self.logger.error("Unable to delete file: {}".format(err)) + pass + + def _manage_specific_pvc_type(self, action, check, pvc_template_file, sc_name, pvc_yaml_file, pvc_name, key): + if check: + if action == viya_constants.KUBECTL_APPLY and check: + self._get_pvc(pvc_name, key) + else: + if action == viya_constants.KUBECTL_APPLY: + self._replace_sc_name_infile(pvc_template_file, sc_name, pvc_yaml_file) + self.utils.deploy_manifest_file(action, pvc_yaml_file) + if action == viya_constants.KUBECTL_DELETE: + rc1 = self.utils.deploy_manifest_file(action, pvc_yaml_file) + self._set_results_namespace_admin(viya_constants.PERM_DELETE + key, rc1) + self._delete_pvc_yaml_file(pvc_yaml_file) + + def manage_pvc(self, action, check): + """ + Apply or delete the pvc + + action: Apply or Delete pvc and set the report status + + """ + storage_class_names = self.get_storage_classes_details() + if storage_class_names: + for value in storage_class_names: + # check for 'kubernetes.io/azure-file', 'azurefile-disk' + self.logger.info("Storage class: {}".format(value)) + if value[1] == PVC_AZURE_FILE: + self._manage_specific_pvc_type(action, check, "pvc_azure_file.template", value[0], + PVC_AZURE_FILE, PVC_AZURE_FILE_NAME, viya_constants.PERM_AZ_FILE) + elif value[1] == PVC_AZURE_FILE_PREMIUM: + self._manage_specific_pvc_type(action, check, "pvc_azure_file_premium.template", value[0], + PVC_AZURE_FILE_PREMIUM, PVC_AZURE_FILE_PREMIUM_NAME, + viya_constants.PERM_AZ_FILE_PR) + elif value[1] == PVC_AZURE_MANAGED_PREMIUM: + self._manage_specific_pvc_type(action, check, "pvc_azure_managed_premium.template", value[0], + PVC_AZURE_MANAGED_PREMIUM, PVC_AZURE_MANAGED_PREMIUM_NAME, + viya_constants.PERM_AZ_DISK) + else: + self.logger.error("Storage class not attempted {}".format(value)) + + self.logger.debug("Namespaced results {}".format(pprint.pformat(self.namespace_admin_permission_data))) + + def get_sc_resources(self): + """ + Uses viyaARK_library common library to retrieve kubernetes resources kind=storage class + return k8s_resource: List of Kubernetes resources + """ + self._storage_class_sc = self.utils.get_resources(KubernetesResource.Kinds.STORAGECLASS) + + def 
get_storage_classes_details(self):
+        """
+        Return a list of (storage class name, pvc template, provisioner, sku/storage account type)
+        tuples for the Azure storage classes detected in the cluster.
+        """
+        k8s_resources = self._storage_class_sc
+
+        storage_classes = []
+        if self._storage_class_sc is None:
+            return storage_classes
+        for k8s_resource in k8s_resources:
+            self.logger.debug("As Dict {}".format(pprint.pformat(k8s_resource.as_dict())))
+            self.logger.debug("name {} provisioner {} storageaccounttype {} selfLink {} skuName {}".
+                              format(str(k8s_resource.get_name()),
+                                     str(k8s_resource.get_provisioner()),
+                                     str(k8s_resource.get_parameter_value('storageaccounttype')),
+                                     str(k8s_resource.get_self_link()),
+                                     str(k8s_resource.get_parameter_value('skuName'))))
+            if (str(k8s_resource.get_provisioner()) == PROVISIONER_AZURE_FILE) and \
+                    ("/storageclasses/azurefile" in str(k8s_resource.get_self_link())):
+                if str(k8s_resource.get_parameter_value('skuName')) == SC_TYPE_STANDARD_LRS:
+                    storage_classes.append((str(k8s_resource.get_name()),
+                                            PVC_AZURE_FILE,
+                                            str(k8s_resource.get_provisioner()),
+                                            str(k8s_resource.get_parameter_value('skuName'))))
+                elif str(k8s_resource.get_parameter_value('skuName')) == SC_TYPE_PREMIUM_LRS:
+                    storage_classes.append((str(k8s_resource.get_name()),
+                                            PVC_AZURE_FILE_PREMIUM,
+                                            str(k8s_resource.get_provisioner()),
+                                            str(k8s_resource.get_parameter_value('skuName'))))
+            if str(k8s_resource.get_provisioner()) == PROVISIONER_AZURE_DISK and \
+                    ("/storageclasses/managed-premium" in str(k8s_resource.get_self_link())) and \
+                    str(k8s_resource.get_parameter_value('storageaccounttype')) == SC_TYPE_PREMIUM_LRS:
+                storage_classes.append((str(k8s_resource.get_name()),
+                                        PVC_AZURE_MANAGED_PREMIUM,
+                                        str(k8s_resource.get_provisioner()),
+                                        str(k8s_resource.get_parameter_value('storageaccounttype'))))
+        self.logger.debug("Provisioner {} ".format(pprint.pformat(storage_classes)))
+        return storage_classes
+
     def _set_results_namespace_admin_crd(self, resource_key, rc):
         """
         Set permissions status creation of CRD as namespace admin role
diff --git a/pre_install_report/library/pre_install_utils.py b/pre_install_report/library/pre_install_utils.py
index 177cfc3..4b0fe86 100644
--- a/pre_install_report/library/pre_install_utils.py
+++ b/pre_install_report/library/pre_install_utils.py
@@ -13,9 +13,12 @@
 from subprocess import CalledProcessError
 import os
+import pprint
+from typing import List
 
 from pre_install_report.library.utils import viya_constants
 from viya_ark_library.k8s.sas_kubectl_interface import KubectlInterface, KubernetesApiResources
+from viya_ark_library.k8s.sas_k8s_objects import KubernetesResource
 from viya_ark_library.logging import ViyaARKLogger
 
 
@@ -53,9 +56,9 @@ def deploy_manifest_file(self, action, file_name):
             error_msg = str(cpe.output)
             self.logger.error("deploy_manifest_file rc {} action {} filepath {} error_msg {}".format(str(rc), action,
                               file_path, error_msg))
-            return rc
+            return 1
 
-        self.logger.info("deploy_manifest_file {} rc {} action {} filepath {} data{}".format(rc, str(rc), action,
+        self.logger.info("deploy_manifest_file rc {} action {} filepath {} data{}".format(str(rc), action,
                          file_path, str(data)))
         return rc
 
@@ -67,13 +70,13 @@ def do_cmd(self, test_cmd):
                   wait --for=delete pod -l app=hello-world-pod
 
         cmd: kubectl command to be executed
-        return: kubectl rc, output
+        return: kubectl rc
         """
         try: 
data = self._kubectl.do(test_cmd, False)
-            self.logger.info("do_cmd " + ' rc = 0' + test_cmd +
-                             ' data = ' + str(data))
+            self.logger.info("cmd {} rc = 0".format(test_cmd))
+            self.logger.debug("cmd {} rc = 0 response {}".format(test_cmd, str(data)))
             return 0
         except CalledProcessError as e:
             data = e.output
@@ -122,6 +125,52 @@ def can_i(self, test_cmd):
                               cpe.returncode + " " + test_cmd)
             return False
 
+    def get_resources(self, resource_kind):
+        """
+        Retrieve all resources of the specified kubernetes resource kind
+
+        return: list of KubernetesResource objects
+        """
+        k8s_resources = []
+        return_code = 0
+        try:
+            k8s_resources: List[KubernetesResource] = self._kubectl.get_resources(resource_kind, False)
+
+        except CalledProcessError as cpe:
+            return_code = str(cpe.returncode)
+            self.logger.exception("resource kind {} return code {}".format(str(resource_kind), str(return_code)))
+            return k8s_resources
+
+        self.logger.debug(" KubernetesResources {}".format(str(resource_kind)))
+        return k8s_resources
+
+    def get_resource(self, resource_kind, resource_name):
+        """
+        Retrieve a specific kubernetes resource by name
+
+        resource_kind: kind of the Kubernetes resource to retrieve
+        resource_name: name of the Kubernetes resource to retrieve
+        return: KubernetesResource object, or None if it could not be retrieved
+        """
+        k8s_resource = None
+        return_code = 0
+        try:
+
+            k8s_resource = self._kubectl.get_resource(resource_kind, resource_name, False)
+        except CalledProcessError as cpe:
+            return_code = str(cpe.returncode)
+            self.logger.exception("resource {} {} return code {}".format(str(resource_kind),
+                                  str(resource_name), str(return_code)))
+            return k8s_resource
+
+        self.logger.debug("resource {} {} KubernetesResource {}".format(str(resource_kind),
+                                                                        str(resource_name),
+                                                                        pprint.pformat(k8s_resource.as_dict())))
+        return k8s_resource
+
     def _get_filepath(self, file_name):
         """
         Assemble and return path for specified file in project library
         """
         current_dir = os.path.dirname(os.path.abspath(__file__))
         file_path = os.path.join(current_dir, "utils" + os.sep + file_name)
-        # current_dir = os.path.dirname(os.path.abspath(__file__))
-        # file_path = os.path.join(current_dir, file_name)
+
         return file_path
diff --git a/pre_install_report/library/utils/pvc_azure_file.template b/pre_install_report/library/utils/pvc_azure_file.template
new file mode 100644
index 0000000..98c7672
--- /dev/null
+++ b/pre_install_report/library/utils/pvc_azure_file.template
@@ -0,0 +1,11 @@
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-azurefile
+spec:
+  accessModes:
+    - ReadWriteMany
+  storageClassName: $sc_name$
+  resources:
+    requests:
+      storage: 5000Gi
diff --git a/pre_install_report/library/utils/pvc_azure_file_premium.template b/pre_install_report/library/utils/pvc_azure_file_premium.template
new file mode 100644
index 0000000..be2a860
--- /dev/null
+++ b/pre_install_report/library/utils/pvc_azure_file_premium.template
@@ -0,0 +1,11 @@
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-azurefile-premium
+spec:
+  accessModes:
+    - ReadWriteMany
+  storageClassName: $sc_name$
+  resources:
+    requests:
+      storage: 150Gi
diff --git a/pre_install_report/library/utils/pvc_azure_managed_premium.template b/pre_install_report/library/utils/pvc_azure_managed_premium.template
new file mode 100644
index 0000000..f253940
--- /dev/null
+++ b/pre_install_report/library/utils/pvc_azure_managed_premium.template
@@ -0,0 +1,11 @@
+apiVersion: v1
+kind: 
PersistentVolumeClaim +metadata: + name: pvc-azure-managed-premium +spec: + accessModes: + - ReadWriteOnce + storageClassName: $sc_name$ + resources: + requests: + storage: 5Gi diff --git a/pre_install_report/library/utils/pvc_nfs.yaml b/pre_install_report/library/utils/pvc_nfs.yaml new file mode 100644 index 0000000..9ede709 --- /dev/null +++ b/pre_install_report/library/utils/pvc_nfs.yaml @@ -0,0 +1,41 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: web + name: web +spec: + replicas: 1 + selector: + matchLabels: + app: web + strategy: {} + template: + metadata: + creationTimestamp: null + labels: + app: web + spec: + containers: + - image: nginx:latest + name: nginx + resources: {} + volumeMounts: + - mountPath: /data + name: data + volumes: + - name: data + persistentVolumeClaim: + claimName: nfs-data +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: nfs-data +spec: + accessModes: + - ReadWriteMany + resources: + requests: + storage: 2Gi + storageClassName: nfs \ No newline at end of file diff --git a/pre_install_report/library/utils/viya_constants.py b/pre_install_report/library/utils/viya_constants.py index 7c573a9..1ccd0f8 100644 --- a/pre_install_report/library/utils/viya_constants.py +++ b/pre_install_report/library/utils/viya_constants.py @@ -31,6 +31,9 @@ PERM_PERMISSIONS = "Permissions" PERM_DEPLOYMENT = "Deployment" PERM_SERVICE = "Service" +PERM_AZ_FILE = "AzureFile Storage RWX" +PERM_AZ_FILE_PR = "AzureFilePremium Storage RWX" +PERM_AZ_DISK = "AzureDisk Storage" PERM_INGRESS = "Ingress" PERM_SAMPLE_STATUS = "Sample Status" PERM_CRD = "Custom Resource Definition" diff --git a/pre_install_report/pre_install_report.py b/pre_install_report/pre_install_report.py index 4f16535..3c5c091 100644 --- a/pre_install_report/pre_install_report.py +++ b/pre_install_report/pre_install_report.py @@ -26,6 +26,7 @@ from viya_ark_library.k8s.sas_k8s_errors import NamespaceNotFoundError from viya_ark_library.k8s.sas_kubectl import Kubectl from viya_ark_library.logging import ViyaARKLogger +from viya_ark_library.lrp_indicator import LRPIndicator PRP = pprint.PrettyPrinter(indent=4) @@ -224,14 +225,16 @@ def main(argv): sys.exit(viya_messages.NAMESPACE_NOT_FOUND_RC_) check_limits = _read_properties_file() - sas_pre_check_report: ViyaPreInstallCheck = ViyaPreInstallCheck(sas_logger, - check_limits["VIYA_KUBELET_VERSION_MIN"], - check_limits["VIYA_MIN_WORKER_ALLOCATABLE_CPU"], - check_limits["VIYA_MIN_AGGREGATE_WORKER_CPU_CORES"], - check_limits["VIYA_MIN_ALLOCATABLE_WORKER_MEMORY"], - check_limits["VIYA_MIN_AGGREGATE_WORKER_MEMORY"]) + with LRPIndicator(enter_message="Gathering facts"): + sas_pre_check_report: ViyaPreInstallCheck = \ + ViyaPreInstallCheck(sas_logger, check_limits["VIYA_KUBELET_VERSION_MIN"], + check_limits["VIYA_MIN_WORKER_ALLOCATABLE_CPU"], + check_limits["VIYA_MIN_AGGREGATE_WORKER_CPU_CORES"], + check_limits["VIYA_MIN_ALLOCATABLE_WORKER_MEMORY"], + check_limits["VIYA_MIN_AGGREGATE_WORKER_MEMORY"]) # gather the details for the report try: + print() sas_pre_check_report.check_details(kubectl, ingress_port, ingress_host, ingress_controller, output_dir) except RuntimeError as e: print() diff --git a/pre_install_report/test/test_data/json_data/azure_file_pvc.json b/pre_install_report/test/test_data/json_data/azure_file_pvc.json new file mode 100644 index 0000000..3bef16a --- /dev/null +++ b/pre_install_report/test/test_data/json_data/azure_file_pvc.json @@ -0,0 +1,21 @@ +{"apiVersion": "v1", + "kind": "PersistentVolumeClaim", + "metadata": 
{"annotations": {"kubectl.kubernetes.io/last-applied-configuration": "{'apiVersion':'v1','kind':'PersistentVolumeClaim','metadata':{'annotations':{},'name':'pvc-azurefile','namespace':'deployment'},'spec':{'accessModes':['ReadWriteMany'],'resources':{'requests':{'storage':'5000Gi'}},'storageClassName':'azurefile'}}\n", + "pv.kubernetes.io/bind-completed": "yes", + "pv.kubernetes.io/bound-by-controller": "yes", + "volume.beta.kubernetes.io/storage-provisioner": "kubernetes.io/azure-file"}, + "creationTimestamp": "2020-09-25T19:43:49Z", + "finalizers": ["kubernetes.io/pvc-protection"], + "name": "pvc-azurefile", + "namespace": "deployment", + "resourceVersion": "59167", + "selfLink": "/api/v1/namespaces/deployment/persistentvolumeclaims/pvc-azurefile", + "uid": "acead292-46fb-4e13-a99a-b74892084f63"}, + "spec": {"accessModes": ["ReadWriteMany"], + "resources": {"requests": {"storage": "5000Gi"}}, + "storageClassName": "azurefile", + "volumeMode": "Filesystem", + "volumeName": "pvc-acead292-46fb-4e13-a99a-b74892084f63"}, + "status": {"accessModes": ["ReadWriteMany"], + "capacity": {"storage": "5000Gi"}, + "phase": "Bound"}} \ No newline at end of file diff --git a/pre_install_report/test/test_data/json_data/azure_sc_detailed.json b/pre_install_report/test/test_data/json_data/azure_sc_detailed.json new file mode 100644 index 0000000..3706b37 --- /dev/null +++ b/pre_install_report/test/test_data/json_data/azure_sc_detailed.json @@ -0,0 +1,111 @@ +{ + "apiVersion": "v1", + "items": [ + { + "allowVolumeExpansion": true, + "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "annotations": { + "kubectl.kubernetes.io/last-applied-configuration": "{\"allowVolumeExpansion\":true,\"apiVersion\":\"storage.k8s.io/v1beta1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{},\"labels\":{\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"azurefile\"},\"parameters\":{\"skuName\":\"Standard_LRS\"},\"provisioner\":\"kubernetes.io/azure-file\"}\n" + }, + "creationTimestamp": "2020-05-19T14:53:55Z", + "labels": { + "kubernetes.io/cluster-service": "true" + }, + "name": "azurefile", + "resourceVersion": "1158", + "selfLink": "/apis/storage.k8s.io/v1/storageclasses/azurefile", + "uid": "afbfdc7f-61b5-4189-ad81-277526cf0821" + }, + "parameters": { + "skuName": "Standard_LRS" + }, + "provisioner": "kubernetes.io/azure-file", + "reclaimPolicy": "Delete", + "volumeBindingMode": "Immediate" + }, + { + "allowVolumeExpansion": true, + "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "annotations": { + "kubectl.kubernetes.io/last-applied-configuration": "{\"allowVolumeExpansion\":true,\"apiVersion\":\"storage.k8s.io/v1beta1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{},\"labels\":{\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"azurefile-premium\"},\"parameters\":{\"skuName\":\"Premium_LRS\"},\"provisioner\":\"kubernetes.io/azure-file\"}\n" + }, + "creationTimestamp": "2020-05-19T14:53:55Z", + "labels": { + "kubernetes.io/cluster-service": "true" + }, + "name": "azurefile-premium", + "resourceVersion": "282", + "selfLink": "/apis/storage.k8s.io/v1/storageclasses/azurefile-premium", + "uid": "07887ed9-bb68-4cfa-a8d7-1b2671930cf4" + }, + "parameters": { + "skuName": "Premium_LRS" + }, + "provisioner": "kubernetes.io/azure-file", + "reclaimPolicy": "Delete", + "volumeBindingMode": "Immediate" + }, + { + "allowVolumeExpansion": true, + "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "annotations": { + 
"kubectl.kubernetes.io/last-applied-configuration": "{\"allowVolumeExpansion\":true,\"apiVersion\":\"storage.k8s.io/v1beta1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.beta.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"default\"},\"parameters\":{\"cachingmode\":\"ReadOnly\",\"kind\":\"Managed\",\"storageaccounttype\":\"StandardSSD_LRS\"},\"provisioner\":\"kubernetes.io/azure-disk\"}\n", + "storageclass.beta.kubernetes.io/is-default-class": "true" + }, + "creationTimestamp": "2020-08-31T19:07:26Z", + "labels": { + "kubernetes.io/cluster-service": "true" + }, + "name": "default", + "resourceVersion": "26874700", + "selfLink": "/apis/storage.k8s.io/v1/storageclasses/default", + "uid": "53c3545d-f24e-44c0-91ea-d1276bed7258" + }, + "parameters": { + "cachingmode": "ReadOnly", + "kind": "Managed", + "storageaccounttype": "StandardSSD_LRS" + }, + "provisioner": "kubernetes.io/azure-disk", + "reclaimPolicy": "Delete", + "volumeBindingMode": "Immediate" + }, + { + "allowVolumeExpansion": true, + "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "annotations": { + "kubectl.kubernetes.io/last-applied-configuration": "{\"allowVolumeExpansion\":true,\"apiVersion\":\"storage.k8s.io/v1beta1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{},\"labels\":{\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"managed-premium\"},\"parameters\":{\"cachingmode\":\"ReadOnly\",\"kind\":\"Managed\",\"storageaccounttype\":\"Premium_LRS\"},\"provisioner\":\"kubernetes.io/azure-disk\"}\n" + }, + "creationTimestamp": "2020-08-31T19:07:26Z", + "labels": { + "kubernetes.io/cluster-service": "true" + }, + "name": "managed-premium", + "resourceVersion": "26874701", + "selfLink": "/apis/storage.k8s.io/v1/storageclasses/managed-premium", + "uid": "6be4df02-c390-460d-840d-40e159886625" + }, + "parameters": { + "cachingmode": "ReadOnly", + "kind": "Managed", + "storageaccounttype": "Premium_LRS" + }, + "provisioner": "kubernetes.io/azure-disk", + "reclaimPolicy": "Delete", + "volumeBindingMode": "Immediate" + } + ], + "kind": "List", + "metadata": { + "resourceVersion": "", + "selfLink": "" + } +} diff --git a/pre_install_report/test/test_data/json_data/azure_storage_class.json b/pre_install_report/test/test_data/json_data/azure_storage_class.json new file mode 100644 index 0000000..2bbe17e --- /dev/null +++ b/pre_install_report/test/test_data/json_data/azure_storage_class.json @@ -0,0 +1,158 @@ +{ + "apiVersion": "v1", + "items": [ + { + "allowVolumeExpansion": true, + "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "annotations": { + "kubectl.kubernetes.io/last-applied-configuration": "{\"allowVolumeExpansion\":true,\"apiVersion\":\"storage.k8s.io/v1beta1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{},\"labels\":{\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"azurefile\"},\"parameters\":{\"skuName\":\"Standard_LRS\"},\"provisioner\":\"kubernetes.io/azure-file\"}\n" + }, + "creationTimestamp": "2020-09-25T17:04:07Z", + "labels": { + "kubernetes.io/cluster-service": "true" + }, + "name": "azurefile", + "resourceVersion": "283", + "selfLink": "/apis/storage.k8s.io/v1/storageclasses/azurefile", + "uid": "edb4d8cd-3b94-4b95-ba77-8a86e1a6a4fc" + }, + "parameters": { + "skuName": "Standard_LRS" + }, + "provisioner": "kubernetes.io/azure-file", + "reclaimPolicy": "Delete", + "volumeBindingMode": "Immediate" + }, + { + "allowVolumeExpansion": true, 
+ "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "annotations": { + "kubectl.kubernetes.io/last-applied-configuration": "{\"allowVolumeExpansion\":true,\"apiVersion\":\"storage.k8s.io/v1beta1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{},\"labels\":{\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"azurefile-premium\"},\"parameters\":{\"skuName\":\"Premium_LRS\"},\"provisioner\":\"kubernetes.io/azure-file\"}\n" + }, + "creationTimestamp": "2020-09-25T17:04:07Z", + "labels": { + "kubernetes.io/cluster-service": "true" + }, + "name": "azurefile-premium", + "resourceVersion": "284", + "selfLink": "/apis/storage.k8s.io/v1/storageclasses/azurefile-premium", + "uid": "6c9b2609-4271-4e9d-96c1-c76e9aaf33ea" + }, + "parameters": { + "skuName": "Premium_LRS" + }, + "provisioner": "kubernetes.io/azure-file", + "reclaimPolicy": "Delete", + "volumeBindingMode": "Immediate" + }, + { + "allowVolumeExpansion": true, + "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "annotations": { + "kubectl.kubernetes.io/last-applied-configuration": "{\"allowVolumeExpansion\":true,\"apiVersion\":\"storage.k8s.io/v1beta1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.beta.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"default\"},\"parameters\":{\"cachingmode\":\"ReadOnly\",\"kind\":\"Managed\",\"storageaccounttype\":\"StandardSSD_LRS\"},\"provisioner\":\"kubernetes.io/azure-disk\"}\n", + "storageclass.beta.kubernetes.io/is-default-class": "true" + }, + "creationTimestamp": "2020-09-25T17:04:07Z", + "labels": { + "kubernetes.io/cluster-service": "true" + }, + "name": "default", + "resourceVersion": "281", + "selfLink": "/apis/storage.k8s.io/v1/storageclasses/default", + "uid": "97a7591d-5d6f-4e6d-8788-a909abf1fe2c" + }, + "parameters": { + "cachingmode": "ReadOnly", + "kind": "Managed", + "storageaccounttype": "StandardSSD_LRS" + }, + "provisioner": "kubernetes.io/azure-disk", + "reclaimPolicy": "Delete", + "volumeBindingMode": "Immediate" + }, + { + "allowVolumeExpansion": true, + "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "annotations": { + "kubectl.kubernetes.io/last-applied-configuration": "{\"allowVolumeExpansion\":true,\"apiVersion\":\"storage.k8s.io/v1beta1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{},\"labels\":{\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"managed-premium\"},\"parameters\":{\"cachingmode\":\"ReadOnly\",\"kind\":\"Managed\",\"storageaccounttype\":\"Premium_LRS\"},\"provisioner\":\"kubernetes.io/azure-disk\"}\n" + }, + "creationTimestamp": "2020-09-25T17:04:07Z", + "labels": { + "kubernetes.io/cluster-service": "true" + }, + "name": "managed-premium", + "resourceVersion": "282", + "selfLink": "/apis/storage.k8s.io/v1/storageclasses/managed-premium", + "uid": "1c7bd224-a13d-460e-a4c3-adc5bb93822d" + }, + "parameters": { + "cachingmode": "ReadOnly", + "kind": "Managed", + "storageaccounttype": "Premium_LRS" + }, + "provisioner": "kubernetes.io/azure-disk", + "reclaimPolicy": "Delete", + "volumeBindingMode": "Immediate" + }, + { + "allowVolumeExpansion": true, + "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "creationTimestamp": "2020-09-25T17:14:22Z", + "labels": { + "kubernetes.io/cluster-service": "true", + "sas.com/admin": "namespace", + "sas.com/deployment": "sas-viya" + }, + "name": "sas-rwo", + "resourceVersion": "2426", + 
"selfLink": "/apis/storage.k8s.io/v1/storageclasses/sas-rwo", + "uid": "2b0ab4d7-ca28-47dd-96c4-aba0eebfe896" + }, + "parameters": { + "kind": "managed", + "storageaccounttype": "Standard_LRS" + }, + "provisioner": "kubernetes.io/azure-disk", + "reclaimPolicy": "Delete", + "volumeBindingMode": "WaitForFirstConsumer" + }, + { + "allowVolumeExpansion": true, + "apiVersion": "storage.k8s.io/v1", + "kind": "StorageClass", + "metadata": { + "creationTimestamp": "2020-09-25T17:14:22Z", + "labels": { + "kubernetes.io/cluster-service": "true", + "sas.com/admin": "namespace", + "sas.com/deployment": "sas-viya" + }, + "name": "sas-rwx", + "resourceVersion": "2427", + "selfLink": "/apis/storage.k8s.io/v1/storageclasses/sas-rwx", + "uid": "97c97e67-43f2-4cdc-8a94-00b49b1bc516" + }, + "parameters": { + "skuName": "Standard_LRS" + }, + "provisioner": "kubernetes.io/azure-file", + "reclaimPolicy": "Delete", + "volumeBindingMode": "Immediate" + } + ], + "kind": "List", + "metadata": { + "resourceVersion": "", + "selfLink": "" + } +} diff --git a/viya_ark_library/k8s/sas_k8s_objects.py b/viya_ark_library/k8s/sas_k8s_objects.py index 7c8988d..b322820 100644 --- a/viya_ark_library/k8s/sas_k8s_objects.py +++ b/viya_ark_library/k8s/sas_k8s_objects.py @@ -384,6 +384,7 @@ class Keys(object): OPERATING_SYSTEM = "operatingSystem" OS_IMAGE = "osImage" OWNER_REFERENCES = "ownerReferences" + PARAMETERS = "parameters" PATH = "path" PATHS = "paths" PHASE = "phase" @@ -394,6 +395,7 @@ class Keys(object): PORT = "port" PORTS = "ports" PROTOCOL = "protocol" + PROVISIONER = "provisioner" PUBLISH_NODE_PORT_SERVICE = "publishNodePortService" READY = "ready" READY_REPLICAS = "readyReplicas" @@ -448,6 +450,7 @@ class Kinds(object): REPLICA_SET = "ReplicaSet" SERVICE = "Service" STATEFUL_SET = "StatefulSet" + STORAGECLASS = "sc" def __init__(self, resource: Union[Dict, AnyStr]) -> None: """ @@ -743,3 +746,23 @@ def as_dict(self) -> Dict: :return: A native 'dict' version of this Kubernetes resource. """ return self._resource + + def get_parameter_value(self, key: Text) -> Optional[Any]: + """ + Returns the given key's value from the 'parameters' dictionary. + + :param key: The key of the value to return. + :return: The value mapped to the given key, or None if the given key doesn't exist. + """ + try: + return self._resource[self.Keys.PARAMETERS][key] + except KeyError: + return None + + def get_provisioner(self) -> Optional[AnyStr]: + """ + Returns the 'metadata.creationTimestamp' value for this Resource. + + :return: This Resource's 'metadata.creationTimestamp' value. + """ + return self._resource.get(self.Keys.PROVISIONER) \ No newline at end of file From 944420dee2fb88f024a247c26b7ddb3582bf3415 Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Fri, 2 Oct 2020 11:38:11 -0400 Subject: [PATCH 07/31] (lasivasas-issue-26) Commit files Check Permissions for azure Persistent Volume Claims v2 --- viya_ark_library/k8s/sas_k8s_objects.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/viya_ark_library/k8s/sas_k8s_objects.py b/viya_ark_library/k8s/sas_k8s_objects.py index b322820..8ed7c32 100644 --- a/viya_ark_library/k8s/sas_k8s_objects.py +++ b/viya_ark_library/k8s/sas_k8s_objects.py @@ -765,4 +765,4 @@ def get_provisioner(self) -> Optional[AnyStr]: :return: This Resource's 'metadata.creationTimestamp' value. 
""" - return self._resource.get(self.Keys.PROVISIONER) \ No newline at end of file + return self._resource.get(self.Keys.PROVISIONER) From 250c9ca020b8c8892fd93795f96e83e76a027bcf Mon Sep 17 00:00:00 2001 From: Amy Ho Date: Tue, 6 Oct 2020 12:46:39 -0500 Subject: [PATCH 08/31] (issue-21) Add option to download_pod_logs to output logs in original JSON format --- download_pod_logs/download_pod_logs.py | 5 +- download_pod_logs/model.py | 51 ++++++++++--------- .../test/data/expected_usage_output.txt | 2 + 3 files changed, 34 insertions(+), 24 deletions(-) diff --git a/download_pod_logs/download_pod_logs.py b/download_pod_logs/download_pod_logs.py index 501887e..4f69bfc 100644 --- a/download_pod_logs/download_pod_logs.py +++ b/download_pod_logs/download_pod_logs.py @@ -98,6 +98,9 @@ def main(argv: List): help="Wait time, in seconds, before terminating a log-gathering process. Defaults to " f"\"{PodLogDownloader.DEFAULT_WAIT}\".") + arg_parser.add_argument( + "--no-parse", action="store_true", dest="noparse", help="Download logfile in original JSON format.") + # add positional arguments arg_parser.add_argument( "selected_components", default=None, nargs="*", @@ -127,7 +130,7 @@ def main(argv: List): log_downloader = PodLogDownloader(kubectl=kubectl, output_dir=args.output_dir, processes=args.processes, - wait=args.wait) + wait=args.wait, noparse=args.noparse) except AttributeError as e: print() print(f"ERROR: {e}", sys.stderr) diff --git a/download_pod_logs/model.py b/download_pod_logs/model.py index 086abe1..4984f2b 100644 --- a/download_pod_logs/model.py +++ b/download_pod_logs/model.py @@ -38,7 +38,7 @@ class PodLogDownloader(object): DEFAULT_WAIT = 30 def __init__(self, kubectl: KubectlInterface, output_dir: Text = DEFAULT_OUTPUT_DIR, - processes: int = DEFAULT_PROCESSES, wait: int = DEFAULT_WAIT) -> None: + processes: int = DEFAULT_PROCESSES, wait: int = DEFAULT_WAIT, noparse: bool = False) -> None: """ PodLogDownloader is responsible for configuring and executing asynchronous log gathering for all or a select number of pods in a given namespace. 
@@ -61,6 +61,7 @@ def __init__(self, kubectl: KubectlInterface, output_dir: Text = DEFAULT_OUTPUT_ self._processes = processes self._pool = Pool(processes=processes) self._wait = wait + self._noparse = noparse def download_logs(self, selected_components: Optional[List[Text]] = None, tail: int = DEFAULT_TAIL) \ -> Tuple[Text, List[Text], List[Tuple[Text, Text]]]: @@ -136,7 +137,8 @@ def download_logs(self, selected_components: Optional[List[Text]] = None, tail: # create the list of pooled asynchronous processes write_log_processes: List[_LogDownloadProcess] = list() for pod in selected_pods: - process = self._pool.apply_async(self._write_log, args=(self._kubectl, pod, tail, self._output_dir)) + process = self._pool.apply_async(self._write_log, args=(self._kubectl, pod, tail, self._output_dir, + self._noparse)) download_process = _LogDownloadProcess(pod.get_name(), process) write_log_processes.append(download_process) @@ -156,7 +158,7 @@ def download_logs(self, selected_components: Optional[List[Text]] = None, tail: return os.path.abspath(self._output_dir), timeout_pods, error_pods @staticmethod - def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, output_dir: Text) -> \ + def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, output_dir: Text, noparse: bool) -> \ Optional[Tuple[Text, Text]]: """ Internal method used for gathering the status and log for each container in the provided pod and writing @@ -193,20 +195,22 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou # open the output file for writing with open(output_file_path, "w", encoding="utf-8") as output_file: - # template for a line in the log header - header_tmpl = "# {:<25} {}\n" - - # write status information at the top of the log - output_file.write(("#" * 50) + "\n") - output_file.write("# Container Status\n") - output_file.write("#\n") - output_file.writelines(header_tmpl.format("sas-component-name:", pod.get_sas_component_name())) - output_file.writelines(header_tmpl.format("sas-component-version:", pod.get_sas_component_version())) - for key, value in container_status.get_headers_dict().items(): - output_file.writelines(header_tmpl.format(f"{key}:", value)) - - # close the status header - output_file.writelines(("#" * 50) + "\n\n") + if (not noparse): + # template for a line in the log header + header_tmpl = "# {:<25} {}\n" + + # write status information at the top of the log + output_file.write(("#" * 50) + "\n") + output_file.write("# Container Status\n") + output_file.write("#\n") + output_file.writelines(header_tmpl.format("sas-component-name:", pod.get_sas_component_name())) + output_file.writelines(header_tmpl.format("sas-component-version:", + pod.get_sas_component_version())) + for key, value in container_status.get_headers_dict().items(): + output_file.writelines(header_tmpl.format(f"{key}:", value)) + + # close the status header + output_file.writelines(("#" * 50) + "\n\n") # create tuple to hold information about failed containers/pods err_info: Optional[Tuple[Text, Text]] = None @@ -220,15 +224,16 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou err_msg = (f"ERROR: A log could not be retrieved for the container [{container_status.get_name()}] " f"in pod [{pod.get_name()}] in namespace [{kubectl.get_namespace()}]") log: List[AnyStr] = [err_msg] - err_info = (container_status.get_name(), pod.get_name()) # parse any structured logging - file_content = SASStructuredLoggingParser.parse_log(log) - - # 
write the retrieved log to the file - output_file.write("Beginning log...\n\n") - output_file.write("\n".join(file_content)) + if (noparse): + output_file.write("\n".join(log)) + else: + file_content = SASStructuredLoggingParser.parse_log(log) + # write the retrieved log to the file + output_file.write("Beginning log...\n\n") + output_file.write("\n".join(file_content)) return err_info diff --git a/download_pod_logs/test/data/expected_usage_output.txt b/download_pod_logs/test/data/expected_usage_output.txt index c7a910b..d5b0d8e 100644 --- a/download_pod_logs/test/data/expected_usage_output.txt +++ b/download_pod_logs/test/data/expected_usage_output.txt @@ -1,5 +1,6 @@ usage: viya-ark.py download-pod-logs [-h] [-n NAMESPACE] [-o OUTPUT_DIR] [-p PROCESSES] [-t TAIL] [-w WAIT] + [--no-parse] [selected_components [selected_components ...]] Download log files for all or a select list of pods. @@ -25,3 +26,4 @@ optional arguments: "25000". -w WAIT, --wait WAIT Wait time, in seconds, before terminating a log- gathering process. Defaults to "30". + --no-parse Download logfile in original JSON format. From 0c83b1b72787aced9dfff6dc7ff67d42964002a1 Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Tue, 6 Oct 2020 15:29:04 -0400 Subject: [PATCH 09/31] (lasivasas-issue-26) Commit Indicate Permissions of Skipped for Azure Persistent Volume Claims if Storage Classes cannot be read --- .../library/pre_install_check_permissions.py | 26 +++++++++++++++++-- .../library/utils/viya_constants.py | 1 + 2 files changed, 25 insertions(+), 2 deletions(-) diff --git a/pre_install_report/library/pre_install_check_permissions.py b/pre_install_report/library/pre_install_check_permissions.py index 85d8a8d..df68dc3 100644 --- a/pre_install_report/library/pre_install_check_permissions.py +++ b/pre_install_report/library/pre_install_check_permissions.py @@ -169,7 +169,7 @@ def manage_pvc(self, action, check): """ storage_class_names = self.get_storage_classes_details() - if storage_class_names: + if len(storage_class_names) > 0: for value in storage_class_names: # check for 'kubernetes.io/azure-file', 'azurefile-disk' self.logger.info("Storage class: {}".format(value)) @@ -185,16 +185,38 @@ def manage_pvc(self, action, check): PVC_AZURE_MANAGED_PREMIUM, PVC_AZURE_MANAGED_PREMIUM_NAME, viya_constants.PERM_AZ_DISK) else: - self.logger.error("Storage class not attempted {}".format(value)) + self.logger.debug("Storage class not attempted {}".format(value)) + else: + self._skip_pvc_check() self.logger.debug("Namespaced results {}".format(pprint.pformat(self.namespace_admin_permission_data))) + def _skip_pvc_check(self): + self.namespace_admin_permission_data[viya_constants.PERM_AZ_FILE] = viya_constants.PERM_SKIPPING + self.namespace_admin_permission_data[viya_constants.PERM_AZ_FILE_PR] = viya_constants.PERM_SKIPPING + self.namespace_admin_permission_data[viya_constants.PERM_AZ_DISK] = viya_constants.PERM_SKIPPING + self.namespace_admin_permission_data[viya_constants.PERM_DELETE + viya_constants.PERM_AZ_FILE]\ + = viya_constants.PERM_SKIPPING + self.namespace_admin_permission_data[viya_constants.PERM_DELETE + viya_constants.PERM_AZ_FILE_PR] \ + = viya_constants.PERM_SKIPPING + self.namespace_admin_permission_data[viya_constants.PERM_DELETE + viya_constants.PERM_AZ_DISK] \ + = viya_constants.PERM_SKIPPING + def get_sc_resources(self): """ Uses viyaARK_library common library to retrieve kubernetes resources kind=storage class return k8s_resource: List of Kubernetes resources """ self._storage_class_sc = 
self.utils.get_resources(KubernetesResource.Kinds.STORAGECLASS) + if not self._storage_class_sc: + self.cluster_admin_permission_data[viya_constants.PERM_GET + viya_constants.PERM_STORAGE_CLASS] = \ + viya_constants.INSUFFICIENT_PERMS + self.cluster_admin_permission_aggregate[viya_constants.PERM_GET + viya_constants.PERM_STORAGE_CLASS] = \ + viya_constants.INSUFFICIENT_PERMS + else: + self.cluster_admin_permission_data[viya_constants.PERM_GET + viya_constants.PERM_STORAGE_CLASS] = \ + viya_constants.ADEQUATE_PERMS + return def get_storage_classes_details(self): """ diff --git a/pre_install_report/library/utils/viya_constants.py b/pre_install_report/library/utils/viya_constants.py index 1ccd0f8..ebe5b9c 100644 --- a/pre_install_report/library/utils/viya_constants.py +++ b/pre_install_report/library/utils/viya_constants.py @@ -38,6 +38,7 @@ PERM_SAMPLE_STATUS = "Sample Status" PERM_CRD = "Custom Resource Definition" PERM_CR = "Custom Resource" +PERM_STORAGE_CLASS = "Storage Class" PERM_DELETE = "Delete " PERM_CREATE = "Create " PERM_GET = "Get " From 8bf1610d68b50c462309d52e07dbe6e3c14e6655 Mon Sep 17 00:00:00 2001 From: Amy Ho Date: Wed, 7 Oct 2020 14:33:10 -0500 Subject: [PATCH 10/31] (issue-21) Add option to download_pod_logs to output logs in original JSON format --- download_pod_logs/download_pod_logs.py | 7 +-- download_pod_logs/model.py | 43 +++++++++---------- .../test/data/expected_usage_output.txt | 2 +- 3 files changed, 26 insertions(+), 26 deletions(-) diff --git a/download_pod_logs/download_pod_logs.py b/download_pod_logs/download_pod_logs.py index 4f69bfc..3baee5b 100644 --- a/download_pod_logs/download_pod_logs.py +++ b/download_pod_logs/download_pod_logs.py @@ -99,7 +99,8 @@ def main(argv: List): f"\"{PodLogDownloader.DEFAULT_WAIT}\".") arg_parser.add_argument( - "--no-parse", action="store_true", dest="noparse", help="Download logfile in original JSON format.") + "--no-parse", action="store_true", dest="noparse", + help="Download log files in original format without parsing.") # add positional arguments arg_parser.add_argument( @@ -130,7 +131,7 @@ def main(argv: List): log_downloader = PodLogDownloader(kubectl=kubectl, output_dir=args.output_dir, processes=args.processes, - wait=args.wait, noparse=args.noparse) + wait=args.wait) except AttributeError as e: print() print(f"ERROR: {e}", sys.stderr) @@ -142,7 +143,7 @@ def main(argv: List): print() with LRPIndicator(enter_message="Downloading pod logs"): log_dir, timeout_pods, error_pods = log_downloader.download_logs( - selected_components=args.selected_components, tail=args.tail) + selected_components=args.selected_components, tail=args.tail, noparse=args.noparse) # print any containers that encountered errors, if present if len(error_pods) > 0: diff --git a/download_pod_logs/model.py b/download_pod_logs/model.py index 4984f2b..0e27a1e 100644 --- a/download_pod_logs/model.py +++ b/download_pod_logs/model.py @@ -38,7 +38,7 @@ class PodLogDownloader(object): DEFAULT_WAIT = 30 def __init__(self, kubectl: KubectlInterface, output_dir: Text = DEFAULT_OUTPUT_DIR, - processes: int = DEFAULT_PROCESSES, wait: int = DEFAULT_WAIT, noparse: bool = False) -> None: + processes: int = DEFAULT_PROCESSES, wait: int = DEFAULT_WAIT) -> None: """ PodLogDownloader is responsible for configuring and executing asynchronous log gathering for all or a select number of pods in a given namespace. 
@@ -61,16 +61,16 @@ def __init__(self, kubectl: KubectlInterface, output_dir: Text = DEFAULT_OUTPUT_ self._processes = processes self._pool = Pool(processes=processes) self._wait = wait - self._noparse = noparse - def download_logs(self, selected_components: Optional[List[Text]] = None, tail: int = DEFAULT_TAIL) \ - -> Tuple[Text, List[Text], List[Tuple[Text, Text]]]: + def download_logs(self, selected_components: Optional[List[Text]] = None, tail: int = DEFAULT_TAIL, + noparse: bool = False) -> Tuple[Text, List[Text], List[Tuple[Text, Text]]]: """ Downloads the log files for all or a select group of pods and their containers. A status summary is prepended to the downloaded log file. :param selected_components: List of component names for which logs should be retrieved. :param tail: Lines of recent log file to retrieve. + :param noparse: log file in original form. :raises KubectlRequestForbiddenError: If list pods is forbidden in the given namespace. :raises NoPodsError: If pods can be listed but no pods are found in the given namespace. @@ -138,7 +138,7 @@ def download_logs(self, selected_components: Optional[List[Text]] = None, tail: write_log_processes: List[_LogDownloadProcess] = list() for pod in selected_pods: process = self._pool.apply_async(self._write_log, args=(self._kubectl, pod, tail, self._output_dir, - self._noparse)) + noparse)) download_process = _LogDownloadProcess(pod.get_name(), process) write_log_processes.append(download_process) @@ -195,22 +195,21 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou # open the output file for writing with open(output_file_path, "w", encoding="utf-8") as output_file: - if (not noparse): - # template for a line in the log header - header_tmpl = "# {:<25} {}\n" - - # write status information at the top of the log - output_file.write(("#" * 50) + "\n") - output_file.write("# Container Status\n") - output_file.write("#\n") - output_file.writelines(header_tmpl.format("sas-component-name:", pod.get_sas_component_name())) - output_file.writelines(header_tmpl.format("sas-component-version:", - pod.get_sas_component_version())) - for key, value in container_status.get_headers_dict().items(): - output_file.writelines(header_tmpl.format(f"{key}:", value)) - - # close the status header - output_file.writelines(("#" * 50) + "\n\n") + # template for a line in the log header + header_tmpl = "# {:<25} {}\n" + + # write status information at the top of the log + output_file.write(("#" * 50) + "\n") + output_file.write("# Container Status\n") + output_file.write("#\n") + output_file.writelines(header_tmpl.format("sas-component-name:", pod.get_sas_component_name())) + output_file.writelines(header_tmpl.format("sas-component-version:", + pod.get_sas_component_version())) + for key, value in container_status.get_headers_dict().items(): + output_file.writelines(header_tmpl.format(f"{key}:", value)) + + # close the status header + output_file.writelines(("#" * 50) + "\n\n") # create tuple to hold information about failed containers/pods err_info: Optional[Tuple[Text, Text]] = None @@ -227,7 +226,7 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou err_info = (container_status.get_name(), pod.get_name()) # parse any structured logging - if (noparse): + if noparse: output_file.write("\n".join(log)) else: file_content = SASStructuredLoggingParser.parse_log(log) diff --git a/download_pod_logs/test/data/expected_usage_output.txt b/download_pod_logs/test/data/expected_usage_output.txt index 
d5b0d8e..b05615d 100644 --- a/download_pod_logs/test/data/expected_usage_output.txt +++ b/download_pod_logs/test/data/expected_usage_output.txt @@ -26,4 +26,4 @@ optional arguments: "25000". -w WAIT, --wait WAIT Wait time, in seconds, before terminating a log- gathering process. Defaults to "30". - --no-parse Download logfile in original JSON format. + --no-parse Download log files in original format without parsing. From 6385ac063200866933ebe4799c2d11d241110e15 Mon Sep 17 00:00:00 2001 From: Amy Ho Date: Thu, 8 Oct 2020 14:04:53 -0500 Subject: [PATCH 11/31] (issue-21) Add option to download_pod_logs to output logs in original JSON format --- download_pod_logs/test/test_model.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/download_pod_logs/test/test_model.py b/download_pod_logs/test/test_model.py index a03c37a..305cdf3 100644 --- a/download_pod_logs/test/test_model.py +++ b/download_pod_logs/test/test_model.py @@ -27,6 +27,9 @@ #################################################################### # PodLogDownloader Tests ### +# Notes: ### +# No explicit tests for _write_log() because it is tested in ### +# the test_download_logs() ### #################################################################### def test_init_default() -> None: """ From 8bada06de11b5d9f021341af57b9746718d2cf38 Mon Sep 17 00:00:00 2001 From: Amy Ho Date: Mon, 19 Oct 2020 10:21:07 -0500 Subject: [PATCH 12/31] (issue-32) download-pod-logs TypeError: 'NoneType' object is not iterable --- download_pod_logs/download_pod_logs.py | 10 ++- download_pod_logs/model.py | 12 +-- .../test/test_download_pod_logs.py | 75 ++++++++++++++++++- 3 files changed, 88 insertions(+), 9 deletions(-) diff --git a/download_pod_logs/download_pod_logs.py b/download_pod_logs/download_pod_logs.py index 3baee5b..4de0caa 100644 --- a/download_pod_logs/download_pod_logs.py +++ b/download_pod_logs/download_pod_logs.py @@ -166,9 +166,15 @@ def main(argv: List): print("\nThe wait time can be increased using the \"--wait=\" option.") - # print output directory print() - print(f"Log files created in: {log_dir}") + # check folder is empty + if len(os.listdir(log_dir)) == 0: + os.rmdir(log_dir) + print("No log files created.") + else: + # print output directory + print(f"Log files created in: {log_dir}") + print() except (NoMatchingPodsError, NoPodsError) as e: print() diff --git a/download_pod_logs/model.py b/download_pod_logs/model.py index 0e27a1e..c9dfc40 100644 --- a/download_pod_logs/model.py +++ b/download_pod_logs/model.py @@ -158,8 +158,8 @@ def download_logs(self, selected_components: Optional[List[Text]] = None, tail: return os.path.abspath(self._output_dir), timeout_pods, error_pods @staticmethod - def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, output_dir: Text, noparse: bool) -> \ - Optional[Tuple[Text, Text]]: + def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, output_dir: Text, + noparse: bool = False) -> Optional[Tuple[Text, Text]]: """ Internal method used for gathering the status and log for each container in the provided pod and writing the gathered information to an on-disk log file. 
@@ -177,8 +177,11 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou # get the containerStatuses for this pod container_statuses: List[Dict] = pod.get_status_value(KubernetesResource.Keys.CONTAINER_STATUSES) + # create tuple to hold information about failed containers/pods + err_info: Optional[Tuple[Text, Text]] = None + # for each container status in the list, gather status and print values and log into file - for container_status_dict in container_statuses: + for container_status_dict in container_statuses or []: # create object to get container status values container_status = _ContainerStatus(container_status_dict) @@ -211,9 +214,6 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou # close the status header output_file.writelines(("#" * 50) + "\n\n") - # create tuple to hold information about failed containers/pods - err_info: Optional[Tuple[Text, Text]] = None - # call kubectl to get the log for this container try: log: List[AnyStr] = kubectl.logs(pod_name=f"{pod.get_name()} {container_status.get_name()}", diff --git a/download_pod_logs/test/test_download_pod_logs.py b/download_pod_logs/test/test_download_pod_logs.py index 753d671..68548a7 100644 --- a/download_pod_logs/test/test_download_pod_logs.py +++ b/download_pod_logs/test/test_download_pod_logs.py @@ -13,7 +13,9 @@ import pytest from download_pod_logs.download_pod_logs import DownloadPodLogsCommand, main - +from download_pod_logs.model import PodLogDownloader +from viya_ark_library.k8s.sas_k8s_objects import KubernetesResource +from viya_ark_library.k8s.test_impl.sas_kubectl_test import KubectlTest #################################################################### # There is no unit test defined for: ### @@ -22,6 +24,7 @@ # functionality. ### #################################################################### + def test_download_pod_logs_command_command_name(): # create command instance cmd = DownloadPodLogsCommand() @@ -38,6 +41,75 @@ def test_download_pod_logs_command_command_desc(): assert cmd.command_desc() == "Download log files for all or a select list of pods." +def test_write_log_container_status_none() -> None: + """ + This test verifies that an error is not raised when the _write_log() method is passed a pod with no + container_status dictionary. 
+ """ + pod_definition = """ + { + "apiVersion": "v1", + "kind": "Pod", + "metadata": { + "annotations": { + "sas.com/component-name": "sas-annotations", + "sas.com/component-version": "2.2.25-20200506.1588775452057", + "sas.com/version": "2.2.25" + }, + "creationTimestamp": "2020-05-09T00:16:45Z", + "generateName": "sas-annotations-58db55fd65-", + "labels": { + "app": "sas-annotations", + "app.kubernetes.io/name": "sas-annotations", + "pod-template-hash": "58db55fd65", + "sas.com/deployment": "sas-viya" + }, + "name": "sas-annotations-58db55fd65-l2jrw", + "namespace": "test", + "resourceVersion": "11419232", + "selfLink": "/api/v1/namespaces/test/pods/sas-annotations-58db55fd65-l2jrw", + "uid": "b0e2b9c6-58f9-4c81-94b9-8ea2be06a7c1" + }, + "spec": { + "containers": [], + "dnsPolicy": "ClusterFirst", + "enableServiceLinks": true, + "imagePullSecrets": [], + "nodeName": "k8s-master-node.test.sas.com", + "priority": 0, + "restartPolicy": "Always", + "schedulerName": "default-scheduler", + "securityContext": {}, + "serviceAccount": "default", + "serviceAccountName": "default", + "terminationGracePeriodSeconds": 30, + "tolerations": [], + "volumes": [] + }, + "status": { + "conditions": [], + "hostIP": "10.104.215.7", + "phase": "Running", + "podIP": "0.0.0.0", + "podIPs": [ + { + "ip": "0.0.0.0" + } + ], + "qosClass": "Burstable", + "startTime": "2020-05-09T00:16:45Z" + } + } + """ + + # create the Kubernetes resource object + pod: KubernetesResource = KubernetesResource(pod_definition) + + err_info = PodLogDownloader._write_log(kubectl=KubectlTest(), pod=pod, tail=1, + output_dir="./no_container_status_test") + assert err_info is None + + #################################################################### # There are no complete units test defined for: ### # main() ### @@ -45,6 +117,7 @@ def test_download_pod_logs_command_command_desc(): # functionality. ### #################################################################### + def test_usage(capfd) -> None: """ Tests that the usage message is printed as expected. From bfb911ec6e0456f4c1b69c9b472fcb380ae171ba Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Mon, 19 Oct 2020 12:58:26 -0400 Subject: [PATCH 13/31] (#34) istio not supported, remove from checker tool --- pre_install_report/README.md | 2 ++ .../library/pre_install_check_permissions.py | 9 ++++---- pre_install_report/pre_install_report.py | 22 ++++++------------- 3 files changed, 13 insertions(+), 20 deletions(-) diff --git a/pre_install_report/README.md b/pre_install_report/README.md index 03a17e5..5c3f5fd 100644 --- a/pre_install_report/README.md +++ b/pre_install_report/README.md @@ -29,6 +29,8 @@ The following command provides usage details: python viya-ark.py pre-install-report -h ``` +**Note:** The tool currently expects an nginx ingress controller. Other ingress controllers will not be evaluated. + ## Report Output The tool generates the pre-install check report,`viya_pre_install_report_.html`. The report is in a web-viewable, HTML format. 
diff --git a/pre_install_report/library/pre_install_check_permissions.py b/pre_install_report/library/pre_install_check_permissions.py
index df68dc3..9181d16 100644
--- a/pre_install_report/library/pre_install_check_permissions.py
+++ b/pre_install_report/library/pre_install_check_permissions.py
@@ -64,8 +64,8 @@ def __init__(self, params):
         self.ingress_data = {}
         self.ingress_data[viya_constants.INGRESS_CONTROLLER] = self.ingress_controller
         self.ingress_file = "hello-ingress.yaml"
-        if self.ingress_controller == viya_constants.INGRESS_ISTIO:
-            self.ingress_file = "helloworld-gateway.yaml"
+        # if self.ingress_controller == viya_constants.INGRESS_ISTIO:
+        #     self.ingress_file = "helloworld-gateway.yaml"
         self._storage_class_sc: List[KubernetesResource] = None

     def _set_results_cluster_admin(self, resource_key, rc):
@@ -315,10 +315,9 @@ def check_sample_service(self):

     def check_sample_ingress(self):
         """
-        Deploy Kubernetes Ingress or Gateway and Virtual Service for hello-world appliction in specified
+        Deploy a Kubernetes Ingress for the hello-world application in the specified
         namespace and set the permissions status in the namespace_admin_permission_data dict object.
-        If nginx is ingress controller check Ingress deployment. If istio is ingress controller check Gateway
-        and Virtual Service deployment
+        If nginx is the ingress controller, check the Ingress deployment (default)
         """

         rc = self.utils.deploy_manifest_file(viya_constants.KUBECTL_APPLY,
diff --git a/pre_install_report/pre_install_report.py b/pre_install_report/pre_install_report.py
index 3c5c091..a3a0699 100644
--- a/pre_install_report/pre_install_report.py
+++ b/pre_install_report/pre_install_report.py
@@ -120,12 +120,11 @@ def usage(exit_code: int):
     :param exit_code: The exit code to return when exiting the program.
     """
     print()
-    print("Usage: viya-ark.py pre_install_report <-i|--ingress> <-H|--host> <-p|--port> []")
+    print("Usage: viya-ark.py pre_install_report <-H|--host> <-p|--port> []")
     print()
     print("Options:")
-    print(" -i --ingress=nginx or istio (Required)Kubernetes ingress controller used for Viya deployment")
-    print(" -H --host (Required)Ingress host used for Viya deployment")
-    print(" -p --port=xxxxx or \"\" (Required)Ingress port used for Viya deployment")
+    print(" -H --host (Required)Ingress host for nginx used for Viya deployment")
+    print(" -p --port=xxxxx or \"\" (Required)Ingress port for nginx used for Viya deployment")
     print(" -h --help (Optional)Show this usage message")
     print(" -n --namespace (Optional)Kubernetes namespace used for Viya deployment")
     print(" -o, --output-dir=\"\" (Optional)Write the report and log files to the provided directory")
@@ -146,13 +145,12 @@ def main(argv):
     :param argv: The parameters passed to the script at execution.
""" try: - opts, args = getopt.getopt(argv, "i:H:p:hn:o:d", - ["ingress=", "host=", "port=", "help", "namespace=", "output-dir=", "debug"]) + opts, args = getopt.getopt(argv, "H:p:hn:o:d", + ["host=", "port=", "help", "namespace=", "output-dir=", "debug"]) except getopt.GetoptError as opt_error: print(viya_messages.EXCEPTION_MESSAGE.format(opt_error)) usage(viya_messages.BAD_OPT_RC_) - found_ingress_controller: bool = False found_ingress_host: bool = False found_ingress_port: bool = False output_dir: Optional[Text] = "" @@ -167,9 +165,6 @@ def main(argv): logging_level = logging.DEBUG elif opt in ('-n', '--namespace'): name_space = arg - elif opt in ('-i', '--ingress'): - ingress_controller = arg - found_ingress_controller = True elif opt in ('-H', '--host'): ingress_host = arg found_ingress_host = True @@ -183,14 +178,11 @@ def main(argv): print(viya_messages.OPTION_ERROR.format(str(opt))) usage(viya_messages.BAD_OPT_RC_) - if not found_ingress_controller or not found_ingress_host or not found_ingress_port: + if not found_ingress_host or not found_ingress_port: print(viya_messages.OPTION_VALUES_ERROR) usage(viya_messages.BAD_OPT_RC_) - if not(str(ingress_controller) == viya_constants.INGRESS_NGINX - or str(ingress_controller) == viya_constants.INGRESS_ISTIO): - print(viya_messages.INGRESS_CONTROLLER_ERROR) - usage(viya_messages.BAD_OPT_RC_) + ingress_controller = viya_constants.INGRESS_NGINX # make sure path is valid # if output_dir != "": From 1e5c1b9a4621e61e7cdfb8b24e992b8de4c0100b Mon Sep 17 00:00:00 2001 From: Amy Ho Date: Mon, 19 Oct 2020 14:57:38 -0500 Subject: [PATCH 14/31] (issue-32) download-pod-logs TypeError: 'NoneType' object is not iterable --- download_pod_logs/model.py | 10 +-- .../test/data/pod_with_no_status.txt | 53 ++++++++++++++++ .../test/test_download_pod_logs.py | 62 ++----------------- 3 files changed, 64 insertions(+), 61 deletions(-) create mode 100644 download_pod_logs/test/data/pod_with_no_status.txt diff --git a/download_pod_logs/model.py b/download_pod_logs/model.py index c9dfc40..d6770f7 100644 --- a/download_pod_logs/model.py +++ b/download_pod_logs/model.py @@ -159,7 +159,7 @@ def download_logs(self, selected_components: Optional[List[Text]] = None, tail: @staticmethod def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, output_dir: Text, - noparse: bool = False) -> Optional[Tuple[Text, Text]]: + noparse: bool = False) -> List[Tuple[Optional[Text], Text]]: """ Internal method used for gathering the status and log for each container in the provided pod and writing the gathered information to an on-disk log file. 
@@ -177,8 +177,8 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou # get the containerStatuses for this pod container_statuses: List[Dict] = pod.get_status_value(KubernetesResource.Keys.CONTAINER_STATUSES) - # create tuple to hold information about failed containers/pods - err_info: Optional[Tuple[Text, Text]] = None + # create list of tuples to hold information about failed containers/pods + err_info: List[Tuple[Optional[Text], Text]] = list() # for each container status in the list, gather status and print values and log into file for container_status_dict in container_statuses or []: @@ -188,7 +188,7 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou # build the output file name # if there are no containers to report on, return if len(container_statuses) == 0: - return + return [] # if there is only one container, only include the pod name in the log file name elif len(container_statuses) == 1: output_file_path = f"{output_dir}{os.sep}{pod.get_name()}.log" @@ -223,7 +223,7 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou err_msg = (f"ERROR: A log could not be retrieved for the container [{container_status.get_name()}] " f"in pod [{pod.get_name()}] in namespace [{kubectl.get_namespace()}]") log: List[AnyStr] = [err_msg] - err_info = (container_status.get_name(), pod.get_name()) + err_info.append([container_status.get_name(), pod.get_name()]) # parse any structured logging if noparse: diff --git a/download_pod_logs/test/data/pod_with_no_status.txt b/download_pod_logs/test/data/pod_with_no_status.txt new file mode 100644 index 0000000..1cf0f8e --- /dev/null +++ b/download_pod_logs/test/data/pod_with_no_status.txt @@ -0,0 +1,53 @@ +{ + "apiVersion": "v1", + "kind": "Pod", + "metadata": { + "annotations": { + "sas.com/component-name": "sas-annotations", + "sas.com/component-version": "2.2.25-20200506.1588775452057", + "sas.com/version": "2.2.25" + }, + "creationTimestamp": "2020-05-09T00:16:45Z", + "generateName": "sas-annotations-58db55fd65-", + "labels": { + "app": "sas-annotations", + "app.kubernetes.io/name": "sas-annotations", + "pod-template-hash": "58db55fd65", + "sas.com/deployment": "sas-viya" + }, + "name": "sas-annotations-58db55fd65-l2jrw", + "namespace": "test", + "resourceVersion": "11419232", + "selfLink": "/api/v1/namespaces/test/pods/sas-annotations-58db55fd65-l2jrw", + "uid": "b0e2b9c6-58f9-4c81-94b9-8ea2be06a7c1" + }, + "spec": { + "containers": [], + "dnsPolicy": "ClusterFirst", + "enableServiceLinks": true, + "imagePullSecrets": [], + "nodeName": "k8s-master-node.test.sas.com", + "priority": 0, + "restartPolicy": "Always", + "schedulerName": "default-scheduler", + "securityContext": {}, + "serviceAccount": "default", + "serviceAccountName": "default", + "terminationGracePeriodSeconds": 30, + "tolerations": [], + "volumes": [] + }, + "status": { + "conditions": [], + "hostIP": "10.104.215.7", + "phase": "Running", + "podIP": "0.0.0.0", + "podIPs": [ + { + "ip": "0.0.0.0" + } + ], + "qosClass": "Burstable", + "startTime": "2020-05-09T00:16:45Z" + } +} diff --git a/download_pod_logs/test/test_download_pod_logs.py b/download_pod_logs/test/test_download_pod_logs.py index 68548a7..c932496 100644 --- a/download_pod_logs/test/test_download_pod_logs.py +++ b/download_pod_logs/test/test_download_pod_logs.py @@ -46,68 +46,18 @@ def test_write_log_container_status_none() -> None: This test verifies that an error is not raised when the _write_log() method is passed a pod with no 
container_status dictionary. """ - pod_definition = """ - { - "apiVersion": "v1", - "kind": "Pod", - "metadata": { - "annotations": { - "sas.com/component-name": "sas-annotations", - "sas.com/component-version": "2.2.25-20200506.1588775452057", - "sas.com/version": "2.2.25" - }, - "creationTimestamp": "2020-05-09T00:16:45Z", - "generateName": "sas-annotations-58db55fd65-", - "labels": { - "app": "sas-annotations", - "app.kubernetes.io/name": "sas-annotations", - "pod-template-hash": "58db55fd65", - "sas.com/deployment": "sas-viya" - }, - "name": "sas-annotations-58db55fd65-l2jrw", - "namespace": "test", - "resourceVersion": "11419232", - "selfLink": "/api/v1/namespaces/test/pods/sas-annotations-58db55fd65-l2jrw", - "uid": "b0e2b9c6-58f9-4c81-94b9-8ea2be06a7c1" - }, - "spec": { - "containers": [], - "dnsPolicy": "ClusterFirst", - "enableServiceLinks": true, - "imagePullSecrets": [], - "nodeName": "k8s-master-node.test.sas.com", - "priority": 0, - "restartPolicy": "Always", - "schedulerName": "default-scheduler", - "securityContext": {}, - "serviceAccount": "default", - "serviceAccountName": "default", - "terminationGracePeriodSeconds": 30, - "tolerations": [], - "volumes": [] - }, - "status": { - "conditions": [], - "hostIP": "10.104.215.7", - "phase": "Running", - "podIP": "0.0.0.0", - "podIPs": [ - { - "ip": "0.0.0.0" - } - ], - "qosClass": "Burstable", - "startTime": "2020-05-09T00:16:45Z" - } - } - """ + current_dir = os.path.dirname(os.path.abspath(__file__)) + poddata = os.path.join(current_dir, f"data{os.sep}pod_with_no_status.txt") + + with open(poddata) as f: + pod_definition = f.read() # create the Kubernetes resource object pod: KubernetesResource = KubernetesResource(pod_definition) err_info = PodLogDownloader._write_log(kubectl=KubectlTest(), pod=pod, tail=1, output_dir="./no_container_status_test") - assert err_info is None + assert not err_info #################################################################### From d9e41bddadfec4e65207f4908ebd1ea08e90912b Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Mon, 19 Oct 2020 16:01:15 -0400 Subject: [PATCH 15/31] (#34) istio not supported, remove from checker tool --- .../library/utils/viya_messages.py | 2 +- pre_install_report/pre_install_report.py | 36 ++++++++++++------- 2 files changed, 25 insertions(+), 13 deletions(-) diff --git a/pre_install_report/library/utils/viya_messages.py b/pre_install_report/library/utils/viya_messages.py index 800e990..38aa4e9 100644 --- a/pre_install_report/library/utils/viya_messages.py +++ b/pre_install_report/library/utils/viya_messages.py @@ -24,7 +24,7 @@ 'specified in the /viya4-ark/pre_install_report/viya_set_limit.py file' OPTION_ERROR = "ERROR: option {} not recognized" OPTION_VALUES_ERROR = "ERROR: Provide valid values for all required options. Check options -i, -p and -H." -INGRESS_CONTROLLER_ERROR = "ERROR: Ingress controller specified must be nginx or istio. Check value on option -i " +INGRESS_CONTROLLER_ERROR = "ERROR: Ingress controller specified must be nginx. Check value on option -i " OUPUT_PATH_ERROR = "ERROR: The report output path is not valid {}. Check value on option -o " EXCEPTION_MESSAGE = "ERROR: {}" diff --git a/pre_install_report/pre_install_report.py b/pre_install_report/pre_install_report.py index a3a0699..9e4709a 100644 --- a/pre_install_report/pre_install_report.py +++ b/pre_install_report/pre_install_report.py @@ -120,11 +120,12 @@ def usage(exit_code: int): :param exit_code: The exit code to return when exiting the program. 
""" print() - print("Usage: viya-ark.py pre_install_report <-H|--host> <-p|--port> []") + print("Usage: viya-ark.py pre_install_report <-i|--ingress> <-H|--host> <-p|--port> []") print() print("Options:") - print(" -H --host (Required)Ingress host for nginx used for Viya deployment") - print(" -p --port=xxxxx or \"\" (Required)Ingress port for nginx used for Viya deployment") + print(" -i --ingress=nginx (Required)Kubernetes ingress controller used for Viya deployment") + print(" -H --host (Required)Ingress host used for Viya deployment") + print(" -p --port=xxxxx or \"\" (Required)Ingress port used for Viya deployment") print(" -h --help (Optional)Show this usage message") print(" -n --namespace (Optional)Kubernetes namespace used for Viya deployment") print(" -o, --output-dir=\"\" (Optional)Write the report and log files to the provided directory") @@ -145,12 +146,13 @@ def main(argv): :param argv: The parameters passed to the script at execution. """ try: - opts, args = getopt.getopt(argv, "H:p:hn:o:d", - ["host=", "port=", "help", "namespace=", "output-dir=", "debug"]) + opts, args = getopt.getopt(argv, "i:H:p:hn:o:d", + ["ingress=", "host=", "port=", "help", "namespace=", "output-dir=", "debug"]) except getopt.GetoptError as opt_error: print(viya_messages.EXCEPTION_MESSAGE.format(opt_error)) usage(viya_messages.BAD_OPT_RC_) + found_ingress_controller: bool = False found_ingress_host: bool = False found_ingress_port: bool = False output_dir: Optional[Text] = "" @@ -165,6 +167,9 @@ def main(argv): logging_level = logging.DEBUG elif opt in ('-n', '--namespace'): name_space = arg + elif opt in ('-i', '--ingress'): + ingress_controller = arg + found_ingress_controller = True elif opt in ('-H', '--host'): ingress_host = arg found_ingress_host = True @@ -178,13 +183,7 @@ def main(argv): print(viya_messages.OPTION_ERROR.format(str(opt))) usage(viya_messages.BAD_OPT_RC_) - if not found_ingress_host or not found_ingress_port: - print(viya_messages.OPTION_VALUES_ERROR) - usage(viya_messages.BAD_OPT_RC_) - - ingress_controller = viya_constants.INGRESS_NGINX - - # make sure path is valid # + # make sure path is valid and set up logging# if output_dir != "": if not output_dir.endswith(os.sep): output_dir = output_dir + os.sep @@ -201,16 +200,29 @@ def main(argv): usage(viya_messages.BAD_OPT_RC_) sas_logger = ViyaARKLogger(report_log_path, logging_level=logging_level, logger_name="pre_install_logger") + logger = sas_logger.get_logger() _read_environment_var('KUBECONFIG') + if not found_ingress_controller or not found_ingress_host or not found_ingress_port: + logger.error(viya_messages.OPTION_VALUES_ERROR) + print(viya_messages.OPTION_VALUES_ERROR) + usage(viya_messages.BAD_OPT_RC_) + + if not(str(ingress_controller) == viya_constants.INGRESS_NGINX): + logger.error(viya_messages.INGRESS_CONTROLLER_ERROR) + print(viya_messages.INGRESS_CONTROLLER_ERROR) + usage(viya_messages.BAD_OPT_RC_) + try: kubectl = Kubectl(namespace=name_space) except ConnectionError as e: + logger.error(viya_messages.EXCEPTION_MESSAGE.format(e)) print() print(viya_messages.EXCEPTION_MESSAGE.format(e)) print() sys.exit(viya_messages.CONNECTION_ERROR_RC_) except NamespaceNotFoundError as e: + logger.error(viya_messages.EXCEPTION_MESSAGE.format(e)) print() print(viya_messages.EXCEPTION_MESSAGE.format(e)) print() From 0690f92b7222cf27bf9381707a41c206d651ff25 Mon Sep 17 00:00:00 2001 From: Amy Ho Date: Mon, 19 Oct 2020 15:41:11 -0500 Subject: [PATCH 16/31] (issue-32) download-pod-logs TypeError: 'NoneType' object is not iterable --- 
download_pod_logs/model.py | 16 ++++++++-------- download_pod_logs/test/test_download_pod_logs.py | 2 +- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/download_pod_logs/model.py b/download_pod_logs/model.py index d6770f7..24622f6 100644 --- a/download_pod_logs/model.py +++ b/download_pod_logs/model.py @@ -147,11 +147,10 @@ def download_logs(self, selected_components: Optional[List[Text]] = None, tail: error_pods: List[Tuple[Text, Text]] = list() for process in write_log_processes: try: - err_info: Optional[Tuple[Text, Text]] = process.get_process().get(timeout=self._wait) + err_info: List[Tuple[Optional[Text], Text]] = process.get_process().get(timeout=self._wait) # add the error message, if returned - if err_info: - error_pods.append(err_info) + error_pods.extend(err_info) except TimeoutError: timeout_pods.append(process.get_pod_name()) @@ -180,17 +179,18 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou # create list of tuples to hold information about failed containers/pods err_info: List[Tuple[Optional[Text], Text]] = list() + if not container_statuses: + err_info.append((None, pod.get_name())) + return err_info + # for each container status in the list, gather status and print values and log into file - for container_status_dict in container_statuses or []: + for container_status_dict in container_statuses: # create object to get container status values container_status = _ContainerStatus(container_status_dict) # build the output file name - # if there are no containers to report on, return - if len(container_statuses) == 0: - return [] # if there is only one container, only include the pod name in the log file name - elif len(container_statuses) == 1: + if len(container_statuses) == 1: output_file_path = f"{output_dir}{os.sep}{pod.get_name()}.log" # if there is more than one container, include both the pod and container name in the log file name else: diff --git a/download_pod_logs/test/test_download_pod_logs.py b/download_pod_logs/test/test_download_pod_logs.py index c932496..8f1089b 100644 --- a/download_pod_logs/test/test_download_pod_logs.py +++ b/download_pod_logs/test/test_download_pod_logs.py @@ -57,7 +57,7 @@ def test_write_log_container_status_none() -> None: err_info = PodLogDownloader._write_log(kubectl=KubectlTest(), pod=pod, tail=1, output_dir="./no_container_status_test") - assert not err_info + assert not err_info[0][0] #################################################################### From 6f1e613cd73751915d3cd200d823e3188385490a Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Mon, 19 Oct 2020 16:45:00 -0400 Subject: [PATCH 17/31] (#34) istio not supported, remove from checker tool --- pre_install_report/library/pre_install_check_permissions.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/pre_install_report/library/pre_install_check_permissions.py b/pre_install_report/library/pre_install_check_permissions.py index 9181d16..95d0690 100644 --- a/pre_install_report/library/pre_install_check_permissions.py +++ b/pre_install_report/library/pre_install_check_permissions.py @@ -64,8 +64,6 @@ def __init__(self, params): self.ingress_data = {} self.ingress_data[viya_constants.INGRESS_CONTROLLER] = self.ingress_controller self.ingress_file = "hello-ingress.yaml" - # if self.ingress_controller == viya_constants.INGRESS_ISTIO: - # self.ingress_file = "helloworld-gateway.yaml" self._storage_class_sc: List[KubernetesResource] = None def _set_results_cluster_admin(self, resource_key, rc): From 
a94d31b557d53347824f380b562dec0baa5da4eb Mon Sep 17 00:00:00 2001 From: Amy Ho Date: Mon, 19 Oct 2020 21:18:40 -0400 Subject: [PATCH 18/31] (issue-32) download-pod-logs TypeError: 'NoneType' object is not iterable --- download_pod_logs/download_pod_logs.py | 5 ++++- download_pod_logs/model.py | 3 ++- 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/download_pod_logs/download_pod_logs.py b/download_pod_logs/download_pod_logs.py index 4de0caa..26a09ac 100644 --- a/download_pod_logs/download_pod_logs.py +++ b/download_pod_logs/download_pod_logs.py @@ -151,7 +151,10 @@ def main(argv: List): # print the containers that had an error for err_info in error_pods: - print(f" [{err_info[0]}] in pod [{err_info[1]}]", file=sys.stderr) + if err_info[0]: + print(f" [{err_info[0]}] in pod [{err_info[1]}]", file=sys.stderr) + else: + print(f" All containers in pod [{err_info[1]}]", file=sys.stderr) print("\nContainer status information is available in the log file.", file=sys.stderr) diff --git a/download_pod_logs/model.py b/download_pod_logs/model.py index 24622f6..4f034c6 100644 --- a/download_pod_logs/model.py +++ b/download_pod_logs/model.py @@ -150,7 +150,8 @@ def download_logs(self, selected_components: Optional[List[Text]] = None, tail: err_info: List[Tuple[Optional[Text], Text]] = process.get_process().get(timeout=self._wait) # add the error message, if returned - error_pods.extend(err_info) + if err_info: + error_pods.extend(err_info) except TimeoutError: timeout_pods.append(process.get_pod_name()) From 8ed0a7289de19a60cc7ba6c161e987fe20d192f1 Mon Sep 17 00:00:00 2001 From: Amy Ho Date: Tue, 20 Oct 2020 10:55:39 -0400 Subject: [PATCH 19/31] (issue-32) download-pod-logs TypeError: 'NoneType' object is not iterable --- download_pod_logs/model.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/download_pod_logs/model.py b/download_pod_logs/model.py index 4f034c6..9bdceef 100644 --- a/download_pod_logs/model.py +++ b/download_pod_logs/model.py @@ -224,7 +224,7 @@ def _write_log(kubectl: KubectlInterface, pod: KubernetesResource, tail: int, ou err_msg = (f"ERROR: A log could not be retrieved for the container [{container_status.get_name()}] " f"in pod [{pod.get_name()}] in namespace [{kubectl.get_namespace()}]") log: List[AnyStr] = [err_msg] - err_info.append([container_status.get_name(), pod.get_name()]) + err_info.append((container_status.get_name(), pod.get_name())) # parse any structured logging if noparse: From 0b5cb18f2dcf2ebb0ae7668b7e327cb19819f0f8 Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Thu, 22 Oct 2020 18:16:57 -0400 Subject: [PATCH 20/31] =?UTF-8?q?(#38)=20Show=20the=20Calculated=20aggrega?= =?UTF-8?q?te=20=E2=80=9CAllocatable=20Memory=E2=80=9D=20in=20=E2=80=9CGB?= =?UTF-8?q?=E2=80=9D=20on=20the=20Summary=20line?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../library/pre_install_check.py | 14 +++++++---- .../test/test_pre_install_report.py | 23 +++++++++++-------- 2 files changed, 24 insertions(+), 13 deletions(-) diff --git a/pre_install_report/library/pre_install_check.py b/pre_install_report/library/pre_install_check.py index ba0913d..8a492dc 100644 --- a/pre_install_report/library/pre_install_check.py +++ b/pre_install_report/library/pre_install_check.py @@ -71,6 +71,7 @@ def __init__(self, sas_logger: ViyaARKLogger, viya_kubelet_version_min, viya_min self._viya_min_aggregate_worker_CPU_cores: Text = viya_min_aggregate_worker_CPU_cores self._viya_min_allocatable_worker_memory: Text = 
viya_min_allocatable_worker_memory self._viya_min_aggregate_worker_memory: Text = viya_min_aggregate_worker_memory + self._calculated_aggregate_allocatable_memory = None self._workers = 0 def _parse_release_info(self, release_info): @@ -539,7 +540,7 @@ def _check_cpu_errors(self, global_data, total_alloc_cpu_cores: float, aggregate error_msg = "" info_msg = str(viya_constants.EXPECTED) + ': ' + \ self._viya_min_aggregate_worker_CPU_cores + \ - ', Calculated: ' + str(total_alloc_cpu_cores) + ', Calculated: ' + str(round(total_alloc_cpu_cores, 2)) min_aggr_worker_cpu_core = self._get_cpu(self._viya_min_aggregate_worker_CPU_cores, "VIYA_MIN_AGGREGATE_WORKER_CPU_CORES") @@ -548,7 +549,7 @@ def _check_cpu_errors(self, global_data, total_alloc_cpu_cores: float, aggregate aggregate_cpu_failures += 1 # Check for combined cpu_core capacity of the Kubernetes nodes in cluster error_msg = viya_constants.SET + ': ' + \ - str(total_alloc_cpu_cores) + ', ' + \ + str(round(total_alloc_cpu_cores, 2)) + ', ' + \ str(viya_constants.EXPECTED) + ': ' + \ self._viya_min_aggregate_worker_CPU_cores if aggregate_cpu_failures > 0: @@ -572,9 +573,11 @@ def _check_memory_errors(self, global_data, total_allocatable_memory, quantity_, aggregate_memory_data = {} error_msg = "" msg = '' + total_allocatable_memory_toG = total_allocatable_memory.to('G') info_msg = str(viya_constants.EXPECTED) + ': ' + \ self._viya_min_aggregate_worker_memory + \ - ', Calculated: ' + str(total_allocatable_memory.to('Gi')) + ', Calculated: ' + str(round(total_allocatable_memory_toG, 2)) + self._calculated_aggregate_allocatable_memory = total_allocatable_memory.to('Gi') min_aggr_worker_memory = self._get_memory(self._viya_min_aggregate_worker_memory, "VIYA_MIN_AGGREGATE_WORKER_MEMORY", quantity_) @@ -583,7 +586,7 @@ def _check_memory_errors(self, global_data, total_allocatable_memory, quantity_, aggregate_memory_failures += 1 # Check for combined cpu_core capacity of the Kubernetes nodes in cluster error_msg = viya_constants.SET + ': ' + \ - str(total_allocatable_memory.to('Gi')) + ', ' + \ + str(round(total_allocatable_memory_toG, 2)) + ', ' + \ str(viya_constants.EXPECTED) + ': ' + \ self._viya_min_aggregate_worker_memory @@ -981,6 +984,9 @@ def get_config_info(self): self.logger.debug("configs_data {}".format(configs_data)) return configs_data + def get_calculated_aggregate_memory(self): + return self._calculated_aggregate_allocatable_memory + def generate_report(self, global_data, master_data, diff --git a/pre_install_report/test/test_pre_install_report.py b/pre_install_report/test/test_pre_install_report.py index 9fa73b9..62453f2 100644 --- a/pre_install_report/test/test_pre_install_report.py +++ b/pre_install_report/test/test_pre_install_report.py @@ -178,9 +178,10 @@ def test_get_nested_nodes_info(): assert global_data[0]['totalWorkers'] in '3: Current: 3, Expected: Minimum 1' assert global_data[2]['aggregate_cpu_failures'] in 'Current: 10.1, Expected: 12, Issues Found: 2' - assert global_data[3]['aggregate_memory_failures'] in 'Current: 62.3276481628418 Gi, Expected: 56G,' \ + assert global_data[3]['aggregate_memory_failures'] in 'Current: 66.92 G, Expected: 56G,' \ ' Issues Found: 1' - + total_allocatable_memoryG = vpc.get_calculated_aggregate_memory() # quantity_("62.3276481628418 Gi").to('G') + assert str(round(total_allocatable_memoryG.to("G"), 2)) == '66.92 G' assert global_data[4]['aggregate_kubelet_failures'] in '1, Check Kubelet Version on nodes. 
Issues Found: 1' template_render(global_data, configs_data, storage_data, 'nested_nodes_info.html') @@ -219,9 +220,11 @@ def test_get_nested_millicores_nodes_info(): assert global_data[0]['totalWorkers'] in '3: Current: 3, Expected: Minimum 1' assert global_data[2]['aggregate_cpu_failures'] in 'Current: 2.5, Expected: 20, Issues Found: 3' - assert global_data[3]['aggregate_memory_failures'] in 'Current: 62.3276481628418 Gi, Expected: 156G,' \ + assert global_data[3]['aggregate_memory_failures'] in 'Current: 66.92 G, Expected: 156G,' \ ' Issues Found: 2' + total_allocatable_memoryG = vpc.get_calculated_aggregate_memory() + assert str(round(total_allocatable_memoryG.to("G"), 2)) == '66.92 G' assert global_data[4]['aggregate_kubelet_failures'] in '2, Check Kubelet Version on nodes. Issues Found: 2' template_render(global_data, configs_data, storage_data, 'nested_millicores_nodes_info.html') @@ -251,8 +254,10 @@ def test_ranchersingle_get_nested_nodes_info(): pprint.pprint(global_data) for nodes in global_data: assert global_data[2]['aggregate_cpu_failures'] in 'Current: 8.0, Expected: 12, Issues Found: 1' - assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 62.75947570800781 Gi, ' \ + assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 67.39 G, ' \ 'Issues Found: 0' + total_allocatable_memoryG = vpc.get_calculated_aggregate_memory() + assert str(round(total_allocatable_memoryG.to("G"), 2)) == '67.39 G' assert global_data[4]['aggregate_kubelet_failures'] in 'Check Kubelet Version on nodes. Issues Found: 0' template_render(global_data, configs_data, storage_data, 'ranchersingle_nested_nodes_info.html') @@ -283,7 +288,7 @@ def test_ranchermulti_get_nested_nodes_info(): for nodes in global_data: assert global_data[2]['aggregate_cpu_failures'] in 'Expected: 12, Calculated: 40.0, Issues Found: 0' - assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 313.30909729003906 Gi, ' \ + assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 336.41 G, ' \ 'Issues Found: 0' assert global_data[4]['aggregate_kubelet_failures'] in 'Check Kubelet Version on nodes. Issues Found: 0' @@ -421,7 +426,7 @@ def test_azure_terrform_multi_nodes_info(): for nodes in global_data: assert global_data[2]['aggregate_cpu_failures'] in 'Expected: 12, Calculated: 39.1, Issues Found: 0' - assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 135.50825119018555 Gi,' \ + assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 145.5 G,' \ ' Issues Found: 0' assert global_data[4]['aggregate_kubelet_failures'] in 'Check Kubelet Version on nodes. Issues Found: 0' @@ -454,7 +459,7 @@ def test_azure_multi_get_nested_nodes_info(): for nodes in global_data: assert global_data[2]['aggregate_cpu_failures'] in 'Expected: 12, Calculated: 31.28, Issues Found: 0' - assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 93.6175537109375 Gi, ' \ + assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 100.52 G, ' \ 'Issues Found: 0' assert global_data[4]['aggregate_kubelet_failures'] in '0, Check Kubelet Version on nodes.' 
@@ -487,8 +492,8 @@ def test_azure_worker_nodes(): pprint.pprint(global_data) for nodes in global_data: assert global_data[2]['aggregate_cpu_failures'] in \ - 'Expected: 12, Calculated: 141.55999999999997, Issues Found: 0' - assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 677.8613586425781 Gi, ' \ + 'Expected: 12, Calculated: 141.56, Issues Found: 0' + assert global_data[3]['aggregate_memory_failures'] in 'Expected: 56G, Calculated: 727.85 G, ' \ 'Issues Found: 0' assert global_data[4]['aggregate_kubelet_failures'] in '0, Check Kubelet Version on nodes.' From 959df32d8fa8bae25aa7ebcb387ec9f66ed026a6 Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Fri, 23 Oct 2020 13:32:47 -0400 Subject: [PATCH 21/31] =?UTF-8?q?(#38)=20Show=20the=20Calculated=20aggrega?= =?UTF-8?q?te=20=E2=80=9CAllocatable=20Memory=E2=80=9D=20in=20=E2=80=9CGB?= =?UTF-8?q?=E2=80=9D=20on=20the=20Summary=20line?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../test/test_pre_install_report.py | 26 +++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/pre_install_report/test/test_pre_install_report.py b/pre_install_report/test/test_pre_install_report.py index 62453f2..06b3405 100644 --- a/pre_install_report/test/test_pre_install_report.py +++ b/pre_install_report/test/test_pre_install_report.py @@ -540,6 +540,32 @@ def createViyaPreInstallCheck(viya_kubelet_version_min, return sas_pre_check_report +def test_get_calculated_aggregate_memory(): + + vpc = createViyaPreInstallCheck(viya_kubelet_version_min, + viya_min_worker_allocatable_CPU, + viya_min_aggregate_worker_CPU_cores, + viya_min_allocatable_worker_memory, + viya_min_aggregate_worker_memory) + + current_dir = os.path.dirname(os.path.abspath(__file__)) + datafile = os.path.join(current_dir, 'test_data/json_data/nodes_info.json') + # Register Python Package Pint definitions + quantity_ = register_pint() + with open(datafile) as f: + data = json.load(f) + nodes_data = vpc.get_nested_nodes_info(data, quantity_) + + global_data = [] + + assert vpc.get_calculated_aggregate_memory() is None + cluster_info = "Kubernetes master is running at https://0.0.0.0:6443\n" + global_data = vpc.evaluate_nodes(nodes_data, global_data, cluster_info, quantity_) + + total_allocatable_memoryGi = vpc.get_calculated_aggregate_memory() + assert str(total_allocatable_memoryGi) == '62.3276481628418 Gi' + + def test_check_permissions(): # namespace = 'default' params = {} From b75faf5212b1bd8a4d29b75a645b219d81ca9a6b Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Thu, 29 Oct 2020 16:45:59 -0400 Subject: [PATCH 22/31] (#40) Determin Ingress Host and Port Values --- pre_install_report/README.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/pre_install_report/README.md b/pre_install_report/README.md index 5c3f5fd..3f61e6a 100644 --- a/pre_install_report/README.md +++ b/pre_install_report/README.md @@ -31,6 +31,22 @@ python viya-ark.py pre-install-report -h **Note:** The tool currently expects an nginx ingress controller. Other ingress controllers will not be evaluated. +**Hints for Determining the Ingress Host and Port** parameter values when working with Azure, nginx ingress controller and Load Balancer. 
+* Determine the ingress namespace with kubectl command: `kubectl get namespaces` +* Make a note of the namespace where nginx controller is available +* Determine the ingress controller service name with kubectl command: `kubectl get svc -n ` +* Note the nginx-controller details in the example output shown below: +``` +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +ingress-nginx-controller LoadBalancer 10.0.00.000 55.147.22.101 80:31254/TCP,443:31383/TCP 28d +``` +Use options as shown below: + +``` + -i nginx -H "52.247.32.111" -p "80" + -i nginx -H "52.247.32.111" -p "443" +``` + ## Report Output The tool generates the pre-install check report,`viya_pre_install_report_.html`. The report is in a web-viewable, HTML format. From 2fadb4c5d04f85a8b42b4e2b607ea7d4b03833c2 Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Mon, 2 Nov 2020 12:33:41 -0500 Subject: [PATCH 23/31] (#40) Determin Ingress Host and Port Values --- pre_install_report/README.md | 22 +++++++++++++--------- 1 file changed, 13 insertions(+), 9 deletions(-) diff --git a/pre_install_report/README.md b/pre_install_report/README.md index 3f61e6a..74f2eb7 100644 --- a/pre_install_report/README.md +++ b/pre_install_report/README.md @@ -31,20 +31,24 @@ python viya-ark.py pre-install-report -h **Note:** The tool currently expects an nginx ingress controller. Other ingress controllers will not be evaluated. -**Hints for Determining the Ingress Host and Port** parameter values when working with Azure, nginx ingress controller and Load Balancer. -* Determine the ingress namespace with kubectl command: `kubectl get namespaces` -* Make a note of the namespace where nginx controller is available -* Determine the ingress controller service name with kubectl command: `kubectl get svc -n ` -* Note the nginx-controller details in the example output shown below: +**Hints and Tips:** The Ingress Host and Port parameter values with Azure, nginx ingress controller and Load Balancer +can be determined with a kubectl command. 
You must specify the namespace where ingress in available as well as the ingress controller name like below: +`kubectl -n get svc ` +* The output from the command will look like the example output shown below: ``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.0.00.000 55.147.22.101 80:31254/TCP,443:31383/TCP 28d ``` -Use options as shown below: - +Use commands as shown below to determine the parameter values: +``` +$ export INGRESS_HOST=externalIP=$(kubectl -n get service -o jsonpath='{.status.loadBalancer.ingress[*].ip}') +$ export INGRESS_HTTP_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="http")].port}') +$ export INGRESS_HTTPS_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="https")].port}') +``` +Use the values gathered on the command line for http or https as appropriate for your deployment: ``` - -i nginx -H "52.247.32.111" -p "80" - -i nginx -H "52.247.32.111" -p "443" + -i nginx -H $INGRESS_HOST -p $INGRESS_HTTP_PORT + -i nginx -H $INGRESS_HOST -p $INGRESS_HTTPS_PORT ``` ## Report Output From 725c97ef941c023c138146f3d971e3c7d864f53f Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Mon, 2 Nov 2020 14:28:39 -0500 Subject: [PATCH 24/31] (#42) Refactor tables in pre-install check report --- .../report_template_viya_pre_install_check.j2 | 72 ++++++++++--------- 1 file changed, 39 insertions(+), 33 deletions(-) diff --git a/pre_install_report/templates/report_template_viya_pre_install_check.j2 b/pre_install_report/templates/report_template_viya_pre_install_check.j2 index 16987d9..3eb55d7 100644 --- a/pre_install_report/templates/report_template_viya_pre_install_check.j2 +++ b/pre_install_report/templates/report_template_viya_pre_install_check.j2 @@ -149,29 +149,29 @@ {% endif %} - {% for node in nodes_data %} + {% if nodes_data|length > 0 %} - {% set check = 'PASS' %} - {% set column_name = '' %} - {% if node.error.cpu %} - {% set check = 'FAIL' %} - {% set column_name = 'CPU' %} - {% endif %} - +
+ + + + - + + {% for node in nodes_data %} + {% endfor %}
Node Name CPUs Cores Allocatable CPU Cores {{ column_name }} Issues
{{node.nodeName}} {{node.cpu}} {{node.allocatablecpu}} {{node.error.cpu}}
- {% endfor %} + {% endif %} {% endif %} @@ -199,21 +199,20 @@ {% endif %} - {% for node in nodes_data %} - {% set check = 'PASS' %} - {% set column_name = '' %} - {% if node.error.kubeletversion %} - {% set check = 'FAIL' %} - {% set column_name = 'Kubelet Version' %} - {% endif %} - + {% if nodes_data|length > 0 %} +
+ + + + - + + {% for node in nodes_data %} @@ -221,8 +220,9 @@ + {% endfor %}
Node Name Kubelet Version Container Runtime Kernel Version {{ column_name }} Issues
{{node.nodeName}} {{node.kubeletversion}}{{node.kernelVersion}} {{node.error.kubeletversion}}
- {% endfor %} + {% endif %} {% endif %} @@ -248,22 +248,23 @@ {% endif %} - {% for node in nodes_data %} - {% set check = 'PASS' %} - {% set column_name = '' %} - {% if node.error.allocatableMemory %} - {% set check = 'FAIL' %} - {% set column_name = 'Memory' %} - {% endif %} - + {% if nodes_data|length > 0 %} +
+ + + + + + - + + {% for node in nodes_data %} @@ -272,8 +273,9 @@ + {% endfor %}
Node Name Memory Allocatable Memory Allocatable Ephemeral Storage Pods Capacity {{ column_name }} Issues
{{node.nodeName}} {{node.memory}}{{node.podscapacity}} {{node.error.allocatableMemory}}
- {% endfor %} + {% endif %} {% endif %} @@ -298,18 +300,22 @@ {% endif %} - {% for node in storage %} - + {% if storage|length > 0 %} +
+ + + {% for node in storage %} + {% endfor %}
Storage Class Status
{{node.storageClassNameName}} {{node.firstFailure}}
- {% endfor %} + {% endif %} {% endif %} From bd25df96e9056db89f0b3deb1a767ab0b2851d2f Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Mon, 2 Nov 2020 15:11:59 -0500 Subject: [PATCH 25/31] (#40) Determin Ingress Host and Port Values --- pre_install_report/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pre_install_report/README.md b/pre_install_report/README.md index 74f2eb7..ac56aac 100644 --- a/pre_install_report/README.md +++ b/pre_install_report/README.md @@ -29,10 +29,10 @@ The following command provides usage details: python viya-ark.py pre-install-report -h ``` -**Note:** The tool currently expects an nginx ingress controller. Other ingress controllers will not be evaluated. +**Note:** The tool currently expects an NGINX Ingress controller. Other Ingress controllers will not be evaluated. -**Hints and Tips:** The Ingress Host and Port parameter values with Azure, nginx ingress controller and Load Balancer -can be determined with a kubectl command. You must specify the namespace where ingress in available as well as the ingress controller name like below: +**Hints and Tips:** The Ingress Host and Port parameter values with Azure, NGINX ingress controller and Load Balancer +can be determined with a kubectl command. You must specify the namespace where Ingress in available as well as the Ingress controller name like below: `kubectl -n get svc ` * The output from the command will look like the example output shown below: ``` From cba1eb464ab86e128dcff4d879575a062f317eac Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Wed, 11 Nov 2020 12:00:00 -0500 Subject: [PATCH 26/31] (#45) Remove Control Plane detail from pre-insatll report --- .../report_template_viya_pre_install_check.j2 | 21 ------------------- 1 file changed, 21 deletions(-) diff --git a/pre_install_report/templates/report_template_viya_pre_install_check.j2 b/pre_install_report/templates/report_template_viya_pre_install_check.j2 index 3eb55d7..ecd072d 100644 --- a/pre_install_report/templates/report_template_viya_pre_install_check.j2 +++ b/pre_install_report/templates/report_template_viya_pre_install_check.j2 @@ -107,27 +107,6 @@ {% endfor %} - -{% for master_nodes in master_data %} - {% if loop.index == 1 %} - -
-

Control Plane Information - {{master_nodes.issue}}

-
- - - - - - - -
Status
{{master_nodes.firstFailure}}
-
-
- {% endif %} -{% endfor %} - - {% for nodes_data in global_data %} {% set totals = '0' %} From 08c1d0c8d230258dfcbec663725603fa154ae956 Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Fri, 13 Nov 2020 15:42:15 -0500 Subject: [PATCH 27/31] (#47) Check that the kubeconfig file specified with KUBECONFIG environment variable exists --- .../library/utils/viya_messages.py | 2 ++ pre_install_report/pre_install_report.py | 8 ++++++-- .../test/test_pre_install_report.py | 16 ++++++++++++++++ 3 files changed, 24 insertions(+), 2 deletions(-) diff --git a/pre_install_report/library/utils/viya_messages.py b/pre_install_report/library/utils/viya_messages.py index 38aa4e9..c081803 100644 --- a/pre_install_report/library/utils/viya_messages.py +++ b/pre_install_report/library/utils/viya_messages.py @@ -27,6 +27,8 @@ INGRESS_CONTROLLER_ERROR = "ERROR: Ingress controller specified must be nginx. Check value on option -i " OUPUT_PATH_ERROR = "ERROR: The report output path is not valid {}. Check value on option -o " EXCEPTION_MESSAGE = "ERROR: {}" +KUBECONF_FILE_ERROR = "ERROR: The file specified in the KUBECONFIG environment does not exist. " \ + "Check that file {} exists." # command line return codes # SUCCESS_RC_ = 0 diff --git a/pre_install_report/pre_install_report.py b/pre_install_report/pre_install_report.py index 9e4709a..43e94d9 100644 --- a/pre_install_report/pre_install_report.py +++ b/pre_install_report/pre_install_report.py @@ -66,7 +66,7 @@ def _read_properties_file(): sys.exit(viya_messages.SET_LIMTS_ERROR_RC_) -def _read_environment_var(env_var): +def read_environment_var(env_var): """ This method verifies that the KUBECONFIG environment variable is set. @@ -74,6 +74,10 @@ def _read_environment_var(env_var): """ try: value_env_var = os.environ[env_var] + # Check if specified file exists + if not os.path.exists(value_env_var): + print(viya_messages.KUBECONF_FILE_ERROR.format(value_env_var)) + sys.exit(viya_messages.BAD_ENV_RC_) except Exception: print(viya_messages.KUBECONF_ERROR) sys.exit(viya_messages.BAD_ENV_RC_) @@ -201,7 +205,7 @@ def main(argv): sas_logger = ViyaARKLogger(report_log_path, logging_level=logging_level, logger_name="pre_install_logger") logger = sas_logger.get_logger() - _read_environment_var('KUBECONFIG') + read_environment_var('KUBECONFIG') if not found_ingress_controller or not found_ingress_host or not found_ingress_port: logger.error(viya_messages.OPTION_VALUES_ERROR) diff --git a/pre_install_report/test/test_pre_install_report.py b/pre_install_report/test/test_pre_install_report.py index 06b3405..7cabd72 100644 --- a/pre_install_report/test/test_pre_install_report.py +++ b/pre_install_report/test/test_pre_install_report.py @@ -23,6 +23,8 @@ from pre_install_report.library.pre_install_check_permissions import PreCheckPermissions from viya_ark_library.jinja2.sas_jinja2 import Jinja2TemplateRenderer from viya_ark_library.logging import ViyaARKLogger +from pre_install_report.pre_install_report import read_environment_var +from pre_install_report.library.utils import viya_messages _SUCCESS_RC_ = 0 _BAD_OPT_RC_ = 1 @@ -566,6 +568,20 @@ def test_get_calculated_aggregate_memory(): assert str(total_allocatable_memoryGi) == '62.3276481628418 Gi' +def test_kubconfig_file(): + old_kubeconfig = os.environ.get('KUBECONFIG') # /Users/cat/doc + os.environ['KUBECONFIG'] = 'blah_nonexistentfile_blah' + new_kubeconfig = os.environ.get('KUBECONFIG') # /Users/cat/doc + assert new_kubeconfig == 'blah_nonexistentfile_blah' + try: + read_environment_var('KUBECONFIG') + except 
SystemExit as exc: + assert exc.code == viya_messages.BAD_ENV_RC_ + pass + finally: + os.environ['KUBECONFIG'] = old_kubeconfig + + def test_check_permissions(): # namespace = 'default' params = {} From e4f1556d1a1b6b89620fe0ecb27ed23e4d475433 Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Fri, 13 Nov 2020 16:00:11 -0500 Subject: [PATCH 28/31] (#47) Check that the kubeconfig file specified with KUBECONFIG environment variable exists --- pre_install_report/test/test_pre_install_report.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pre_install_report/test/test_pre_install_report.py b/pre_install_report/test/test_pre_install_report.py index 7cabd72..3f57e39 100644 --- a/pre_install_report/test/test_pre_install_report.py +++ b/pre_install_report/test/test_pre_install_report.py @@ -579,7 +579,7 @@ def test_kubconfig_file(): assert exc.code == viya_messages.BAD_ENV_RC_ pass finally: - os.environ['KUBECONFIG'] = old_kubeconfig + os.environ['KUBECONFIG'] = str(old_kubeconfig) def test_check_permissions(): From 40beb2b5046503d106bac0e7bc4e5f912610593a Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Fri, 13 Nov 2020 22:03:40 -0500 Subject: [PATCH 29/31] (#49) Modify Information for Determining Ingress Port --- pre_install_report/README.md | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/pre_install_report/README.md b/pre_install_report/README.md index ac56aac..f6bf0c9 100644 --- a/pre_install_report/README.md +++ b/pre_install_report/README.md @@ -31,19 +31,23 @@ python viya-ark.py pre-install-report -h **Note:** The tool currently expects an NGINX Ingress controller. Other Ingress controllers will not be evaluated. -**Hints and Tips:** The Ingress Host and Port parameter values with Azure, NGINX ingress controller and Load Balancer -can be determined with a kubectl command. You must specify the namespace where Ingress in available as well as the Ingress controller name like below: -`kubectl -n get svc ` -* The output from the command will look like the example output shown below: +**Hints and Tips:** The values for the Ingress Host and Ingress Port options can be determined with kubectl commands. +The following section provides hints for a NGINX ingress controller of Type Load Balancer. 
_But, these following commands +may need to be modified to suit your Ingress controller deployment._ + +You must specify the namespace where Ingress controller in available as well as the Ingress controller name like below: ``` +kubectl -n get svc + +Sample output from the above command : NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.0.00.000 55.147.22.101 80:31254/TCP,443:31383/TCP 28d ``` Use commands as shown below to determine the parameter values: ``` $ export INGRESS_HOST=externalIP=$(kubectl -n get service -o jsonpath='{.status.loadBalancer.ingress[*].ip}') -$ export INGRESS_HTTP_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="http")].port}') -$ export INGRESS_HTTPS_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="https")].port}') +$ export INGRESS_HTTP_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}') +$ export INGRESS_HTTPS_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') ``` Use the values gathered on the command line for http or https as appropriate for your deployment: ``` From 657adeac0669c7d6cac19e0463d530d55e341193 Mon Sep 17 00:00:00 2001 From: FredPerrySAS <69848070+FredPerrySAS@users.noreply.github.com> Date: Mon, 16 Nov 2020 05:32:28 -0500 Subject: [PATCH 30/31] Edit README.md as requested --- pre_install_report/README.md | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7 deletions(-) diff --git a/pre_install_report/README.md b/pre_install_report/README.md index f6bf0c9..ca320f6 100644 --- a/pre_install_report/README.md +++ b/pre_install_report/README.md @@ -29,27 +29,37 @@ The following command provides usage details: python viya-ark.py pre-install-report -h ``` -**Note:** The tool currently expects an NGINX Ingress controller. Other Ingress controllers will not be evaluated. +**Note:** The tool currently expects an NGINX Ingress controller. Other Ingress controllers are not evaluated. -**Hints and Tips:** The values for the Ingress Host and Ingress Port options can be determined with kubectl commands. -The following section provides hints for a NGINX ingress controller of Type Load Balancer. _But, these following commands -may need to be modified to suit your Ingress controller deployment._ +### Hints + +The values for the Ingress Host and Ingress Port options can be determined with kubectl commands. +The following section provides hints for a NGINX Ingress controller of Type LoadBalancer. The following commands +may need to be modified to suit your Ingress controller deployment. 
+ +You must specify the namespace where the Ingress controller is available as well as the Ingress controller name: -You must specify the namespace where Ingress controller in available as well as the Ingress controller name like below: ``` kubectl -n get svc +``` -Sample output from the above command : +Here is sample output from the command: + +``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.0.00.000 55.147.22.101 80:31254/TCP,443:31383/TCP 28d ``` -Use commands as shown below to determine the parameter values: + +Use the following commands to determine the parameter values: + ``` $ export INGRESS_HOST=externalIP=$(kubectl -n get service -o jsonpath='{.status.loadBalancer.ingress[*].ip}') $ export INGRESS_HTTP_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}') $ export INGRESS_HTTPS_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') ``` + Use the values gathered on the command line for http or https as appropriate for your deployment: + ``` -i nginx -H $INGRESS_HOST -p $INGRESS_HTTP_PORT -i nginx -H $INGRESS_HOST -p $INGRESS_HTTPS_PORT From 57d062e6d9031ce8ee1b75eae8584ab9540f1f66 Mon Sep 17 00:00:00 2001 From: Latha Sivakumar Date: Mon, 16 Nov 2020 18:10:30 -0500 Subject: [PATCH 31/31] (#52) Modify README for determining Ingress HTTP and HTTPS Ports --- pre_install_report/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pre_install_report/README.md b/pre_install_report/README.md index ca320f6..a4059ca 100644 --- a/pre_install_report/README.md +++ b/pre_install_report/README.md @@ -54,8 +54,8 @@ Use the following commands to determine the parameter values: ``` $ export INGRESS_HOST=externalIP=$(kubectl -n get service -o jsonpath='{.status.loadBalancer.ingress[*].ip}') -$ export INGRESS_HTTP_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}') -$ export INGRESS_HTTPS_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') +$ export INGRESS_HTTP_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="http")].port}') +$ export INGRESS_HTTPS_PORT=$(kubectl -n get service -o jsonpath='{.spec.ports[?(@.name=="https")].port}') ``` Use the values gathered on the command line for http or https as appropriate for your deployment:
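Pulling the final README state together, the full sequence looks like the sketch below. The `ingress-nginx` namespace and `ingress-nginx-controller` service name are hypothetical placeholders — substitute the names found in your cluster — and the stray `externalIP=` token that appears in the README's first export line is dropped here, since it would otherwise become part of the variable's value.

```
# Hypothetical names -- substitute the namespace/service found in your cluster.
NS=ingress-nginx
SVC=ingress-nginx-controller

# Gather the Ingress Host and Port values (jsonpath expressions as in the README,
# minus the stray 'externalIP=' token).
export INGRESS_HOST=$(kubectl -n $NS get service $SVC -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
export INGRESS_HTTP_PORT=$(kubectl -n $NS get service $SVC -o jsonpath='{.spec.ports[?(@.name=="http")].port}')
export INGRESS_HTTPS_PORT=$(kubectl -n $NS get service $SVC -o jsonpath='{.spec.ports[?(@.name=="https")].port}')

# Run the pre-install report against the HTTP (or HTTPS) endpoint.
python viya-ark.py pre-install-report -i nginx -H $INGRESS_HOST -p $INGRESS_HTTP_PORT
```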