[infra] Python 1.6.0 (#898)
* merge with master

* [infra] adds upload header to storage and define more modes (#795)

* feat(infra): adds upload header to storage

* fix(infra): replace if header exists in storage

* fix(infra): fix table_id in storage

* feat(infra): add mode `architecture`

* feat(infra): adjust mode

* feat(infra): add new modes

* feat(infra): adjust mode all

* feat(infra): adjust mode all

* feat(infra): change table-approve bd version

* feat(infra): trigger table-approve

* feat(infra): trigger table-approve

* feat(infra): trigger table-approve

* feat(infra): trigger table-approve

* feat(infra): trigger table-approve

* feat(infra): trigger table-approve

* feat(infra): trigger table-approve

* feat(infra): trigger table-approve

* feat(infra): add the option to use bqstorage api (#847)

* [infra] Add validate module [2] (#675)

* draft validate.py

* adding more validate_features

* fix attributes and define some helper functions to cut repetition

* improve storage upload exception

* swap dataset_id and table_id in Table

* adds __init__ to modules

* creates single function to generate metadata

* redesign metadata API to be more intuitive

* add metadata create to CLI

* expose Metadata to users

* fix small typos

* add yaml generator

* improves exception

* adds ckanapi to reqs

* Fix comment_treatment metadata function

* Fix BaseDosDadosException imports

* Fix dataset init file checks

* Raise Exception in case of inconsistent metadata YAML order

* create configs from Metadata

* get rid of validate.py

* configs come with data

* add columns to table_config.yaml

* Add tests to metadata module

* Delete dataset_config.yaml

* Refactor test_metadata

* Improve metadata create tests

* Add more table metadata create tests

* Update metadata create docstring

* Add all test_metadata tests placeholders

* Add tests for Metadata is_updated method

* First working version of Metadata validate method

* Add Metadata validate tests

* Improve metadata validate and its tests

* Add metadata is_updated CLI entrypoint

* Add Metadata validate CLI entrypoint

* First metadata publish version

* Fix metadata create/is_updated bugs, improve validate tests

* Fix metadata's test_create_if_exists_pass

* Refactor metadata code, improve validate method

* Add metadata publish CLI entrypoint

* Fix publish bugs, add resource_patch for bdm_table patches

* Add response return value to publish, improve exceptions

* Add metadata publish tests

* Improve metadata publish and validate docstrings

* Add partition_columns option to metadata create

* Call is_updated before publish

* Fix partitions_writer_condition

* Fix ckan_data_dict

* Update CKAN_URL

* Integrate Table.create and Metadata

* Fix YAML generation for array fields

* feat(infra): adds _make_publish_sql

* fix(infra): add partition columns to be created, fix dataset_id in dataset and autofill type from arq sheet

* fix(infra): change partitions back to str

* fix(infra): enhance organization metadata validation

* fix(infra): YAML complex fields are generated even if there is no data available

* feat(infra): add extras field to dataset validation

* fix(infra): clean spaces and put comma

* fix(infra): partitions from string to list in _is_partitioned function

* fix(infra): fix table_description.txt for tb.publish()

* fix(infra): improve update_columns doc string

* fix(infra): point metadata.CKAN_URL to staging website

* fix(infra): handle new dataset/table case in Metadata.is_updated

* Make CKAN_API_KEY and CKAN_URL come from config.toml

* bump pyproject to 1.6.0-a0

* Add ckan config variables builder

* Add default ckan config to configs/config.toml

* Raise error in case of no CKAN_API_KEY when publishing

* fix(infra): update ruamel.yaml and python dependencies

* fix(infra): base initiation, migrate ckan_url and api_key to __init__

* fix(infra): handle ckan config None values

* fix(infra): handle_complex_fields get correct data

* feat(infra): improve update_columns

* feat(infra): improve update_columns

* fix(infra): change coluna to nome

* bump to 1.6.0a4

* fix(infra): bump to 1.6.0a5

* fix(infra): force utf-8 in all open methods

* feat(infra): release 1.6.0a6

* fix(infra): fix update_columns encoding

* feat(infra): bump version 1.6.0a7

* Add extra dataset metadata fields for validation

* Improve metadata validation

* fix(infra): refactor metadata's ckan_data_dict

* fix(infra): remove input_hints from YAMLs

* fix(infra): shrink organization dataset YAML field

* feat(infra): bump to version 1.6.0-alpha.8

* feat(infra): add test_create_force_columns_is_true metadata test

* feat(infra): refactor metadata tests, add test_force_columns_is_false

* feat(infra): refactor metadata tests

* feat(infra): add partition_columns tests

* fix(infra): refactor the metadata package (#826)

* fix(infra): refactor the metadata package

* fix(infra): add part of the refactoring

* fix(infra): fix refactoring errors

* feat(infra): add support for the 'python -m' command

* feat(infra): add a version option

* feat(infra): format the code with black

* fix(infra): fix some tests and comment out others

* fix(infra): nullify yaml's partitions in case of not-None empty values

* fix(infra): fix Metadata.publish tests, remove debugging code

* feat(infra): make creation of table_config.yaml only optional

* fix(infra): make Metadata.validate work with new datasets and tables

* feat(infra): make Metadata.publish handle new datasets or tables

* fix(infra): create all dataset files

* fix(infra): draft new dataset_description.txt

* fix(infra): make table.py work with new YAML, refactor and fix tests

* fix(infra): handle non-defined variables for dataset_description.txt template

* refactor(infra): make Table and Dataset use Metadata as a component

* fix(infra): add gcloud variables to YAML through config.toml

* feat(infra): bump to 1.6.0-a9

* fix(infra): add organization check (#869)

* fix(infra): add organization check

* fix(infra): format with black

* fix(infra): rename the data checks trigger

* feat(infra): draft the metadata checks action

* Revert "fix(infra): add organization check (#869)"

This reverts commit c82d70a.

* fix(infra): bring back all dataset_config.yaml fields to ckan_data_dict

* fix(infra): sort the libraries

* fix(infra): fix formatting

* fix/validate: fix validate and add actions (#876)

* fix(infra): add organization check

* fix(infra): format with black

* fix(infra): rename the data checks trigger

* feat(infra): draft the metadata checks action

* [dados-fix] Upload INPC (#879)

* feat(docs): clarifications on partitions, temporal_coverage, suffixes. (#846)

* fix(infra): start fixing the tests

* fix(infra): start fixing the tests

* fix(infra): more changes to the tests

* [dados-bot] br_ms_vacinacao_covid19 (2021-10-18) (#884)

Co-authored-by: terminal_name <github_email>

* [dados-bot] br_ms_vacinacao_covid19 (2021-10-19) (#888)

Co-authored-by: terminal_name <github_email>

* [dados-atualizacao] br_anp_precos_combustiveis (#883)

* update fuel price data

* fix a Portuguese typo in the table_description

* fix(infra): fix library sorting

* fix(infra): fix new syntax

Co-authored-by: Gustavo Aires Tiago <[email protected]>
Co-authored-by: Ricardo Dahis <[email protected]>
Co-authored-by: Lucas Moreira <[email protected]>

* Revert "fix/validate: fix validate and add actions (#876)"

This reverts commit 2d3fa09.

* Revert "fix(infra): fix formatting"

This reverts commit cb19f31.

* Revert "fix(infra): sort the libraries"

This reverts commit 698db35.

* Revert "Merge branch 'python-1.6.0' into add_validate_module_2"

This reverts commit 9c305f2, reversing
changes made to aee8c2a.

* feat(infra): add support for organization metadata

* fix(infra): complete all functions and methods docstrings

* docs(infra): add metadata entrypoints walkthrough to docs

Co-authored-by: hellcassius <[email protected]>
Co-authored-by: joaoc <[email protected]>
Co-authored-by: d116626 <[email protected]>
Co-authored-by: Vinicius Aguiar <[email protected]>
Co-authored-by: Gustavo Aires Tiago <[email protected]>
Co-authored-by: Ricardo Dahis <[email protected]>
Co-authored-by: Lucas Moreira <[email protected]>

* fix(infra): use basedosdados-dev for inexistent dataset test

* fix(infra): update bases with master files

* feat(infra): bump version

* fix(infra): update click dependency

* fix(infra): force setup.py to use click==8.0.3

* feat(infra): add new modes to cli help

* Update colab_data.md

* fix(infra): fix none in _load_schema

* fix(infra): fix the case when a table is added for the first time

* feat(infra): bump version

* fix(infra): try to fix merge conflicts

* fix(infra): fix data-check to master

* fix(infra): fix data-check to master

* feat(infra): add url and api_key in env action

* fix(infra): remove space in env-setup

* feat(infra): add metadata validate action

* fix(infra): change actions bd version

* feat(infra): trigger md validate

* fix(infra): change action trigger

* feat(infra): test table-approve

* feat(infra): test table-approve

* feat(infra): test table-approve

* feat(infra): test table-approve, bump version

* feat(infra): test table-approve, bump version

* feat(infra): test table-approve

* feat(infra): test table-approve

* feat(infra): test table-approve

* feat(infra): test table-approve

* feat(infra): test table-approve

* feat(infra): test table-approve

* feat(infra): test table-approve

* feat(infra): test table-approve

* feat(infra): publish rais

* feat(infra): publish rais

* feat(infra): publish rais

* feat(infra): publish rais

* feat(infra): publish rais

* fix(infra): fix _load_schema and publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* feat: updates diretorio_escola, closes #921

* fix(infra): publish rais

* fix(infra): publish escolas

* fix(infra): remove lint check

* fix(infra): try to use storage retry policy

* fix(infra): tb-approve bd version

* fix(infra): bump storage version

* fix(infra): add conditional retry

* fix(infra): add conditional retry
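The conditional-retry commits above amount to retrying Storage uploads only on errors presumed transient. A minimal pure-Python sketch of the idea (this is not the google-cloud-storage retry API itself; the function and error type are illustrative):

```python
import time


def upload_with_retry(upload, max_attempts=4, base_delay=0.5):
    """Retry `upload` with exponential backoff, but only on errors
    presumed transient (simulated here by TimeoutError)."""
    for attempt in range(max_attempts):
        try:
            return upload()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)
```

Non-transient exceptions propagate immediately, which is the point of making the retry conditional.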

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais

* fix(infra): publish rais vinculos

* fix(infra): publish rais vinculos

* fix(infra): change metadata-validate trigger

* fix(infra): adjust copy_table

* fix(infra): change metadata-validate trigger

* fix(infra): change table approve logs

* fix(infra): change metadata-validate logs

* fix(infra): change tb-app mode order

* fix(infra): change tb-app logs

* feat(infra): reactivate actions

* feat(infra): change action logs

* feat(infra): deactivate data-check

* feat(infra): change actions logs

* feat(infra): change actions logs

* feat(infra): republish rais

* fix(infra): improve validate metadata tests

* tests(infra): add test for invalid organization entry

* feat(infra): add --all and --if_exists args to publish

* feat(infra): bump to 1.6.0-b20

* fix(infra): prepare data-check

* fix(infra): test tb-app

* fix(infra): change data-check action

* fix(infra): test data-check

* fix(infra): test data-check

* fix(infra): test data-check

* fix(infra): change ci trigger

* fix(infra): change ci trigger

* fix(infra): change ci trigger

* fix(infra): change ci trigger

* fix(infra): change ci trigger

* fix(infra): test data-check

* docs(infra): add --all cli option docs

* fix(infra): debug data-check

* fix(infra): fix data-check ckan api env variable

* debug(infra): verify data-check env variables

* debug(infra): fix getenv

* feat(infra): test data-check

* feat(infra): test data-check

* feat(infra): test data-check

* feat(infra): test data-check

* feat(infra): test data-check

* feat(infra): test data-check

* feat(infra): test data-check

* feat(infra): test data-check

* debug(infra): test runtime env variables

* debug(infra): try os.environ.get for data-check

* debug(infra): test cache for data-check

* fix(infra): revert data-check changes

* fix(infra): data-check original trigger

* fix(infra): data-check original envs

* fix(infra): deactivate tb-app branch trigger

* fix(infra): update docs folder based on master branch

* feat(infra): add update_locally option to metadata publish

* feat(infra): add update_locally to metadata publish cli

* solves issue #181

* feat(infra): FINAL COMMIT, DAMMIT!!!!!

Co-authored-by: d116626 <[email protected]>
Co-authored-by: Vítor Mussa <[email protected]>
Co-authored-by: hellcassius <[email protected]>
Co-authored-by: Vinicius Aguiar <[email protected]>
Co-authored-by: Gustavo Aires Tiago <[email protected]>
Co-authored-by: Ricardo Dahis <[email protected]>
Co-authored-by: Lucas Moreira <[email protected]>
Co-authored-by: rdahis <[email protected]>
9 people authored Nov 22, 2021
1 parent 3b2c07a commit 3ca2b3c
Showing 56 changed files with 54,560 additions and 884 deletions.
18 changes: 8 additions & 10 deletions .github/workflows/data-check.yml
@@ -4,7 +4,6 @@ on:
pull_request_target:
types:
- labeled

jobs:
guard:
runs-on: ubuntu-latest
@@ -25,7 +24,6 @@ jobs:
steps.changes.outputs.workflows == 'true' &&
github.event.pull_request.head.repo.full_name != github.repository
run: exit 1

get-changes:
needs: guard
runs-on: ubuntu-latest
@@ -57,22 +55,22 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.9.x'
python-version: "3.9.x"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install basedosdados==1.5.2 pyarrow pytest toml
- name: Set up basedosdados environment
run: |
cd .github/workflows/env-setup
python env_setup.py
pip install basedosdados==1.6.0 pyarrow pytest toml
- name: Set up base dos dados environment
shell: bash
env:
BUCKET_NAME: basedosdados-dev
env:
BUCKET_NAME: basedosdados-dev
PROJECT_NAME_PROD: basedosdados-dev
PROJECT_NAME_STAGING: basedosdados-dev
GCP_BD_PROD: ${{ secrets.GCP_BD_DEV_PROD }}
GCP_BD_STAGING: ${{ secrets.GCP_BD_DEV_STAGING }}
CKAN_URL: "https://staging.basedosdados.org"
CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
run: python .github/workflows/env-setup/env_setup.py
- name: Test data and fill report
run: pytest -v .github/workflows/data-check
shell: bash
20 changes: 14 additions & 6 deletions .github/workflows/data-check/checks.yaml
@@ -1,3 +1,11 @@
# test_select_all_works:
# id: {{ dataset_id }}/{{ table_id }}/1
# name: Check if select all works
# query: |
# SELECT EXISTS (
# SELECT *
# FROM `{{ project_id }}.{{ dataset_id }}.{{ table_id }}`
# ) AS success

test_table_exists:
id: {{ dataset_id }}/{{ table_id }}/0
@@ -26,19 +34,19 @@ test_table_has_no_null_column:
SELECT col_name, nulls_count / total_count null_percent
FROM n_nulls, n_total
test_primary_key_has_unique_values:
test_identifying_column_has_unique_values:
id: {{ dataset_id }}/{{ table_id }}/3
name: Check if primary key has unique values
name: Check if identifying column has unique values
query: |
{% if primary_keys is defined and primary_keys is not none -%}
{% if identifying_columns is defined and identifying_columns is not none -%}
SELECT
COUNT(
DISTINCT CONCAT(
{% for primary_key in primary_keys -%}
IFNULL(SAFE_CAST({{ primary_key }} AS STRING), " "), "&",
{% for identifying_column in identifying_columns -%}
IFNULL(SAFE_CAST({{ identifying_column }} AS STRING), " "), "&",
{% endfor -%}
"EOF"
)
) / COUNT(*) unique_percentage
FROM `{{ project_id }}.{{ dataset_id }}.{{ table_id }}` t
{% endif %}
{% endif %}
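To see what the renamed uniqueness check produces, the Jinja query can be rendered locally. A sketch with the CONCAT core of the template above; the column names and IDs are made up:

```python
from jinja2 import Template

# the core of the uniqueness query from checks.yaml
query = Template("""\
SELECT
  COUNT(
    DISTINCT CONCAT(
      {% for identifying_column in identifying_columns -%}
      IFNULL(SAFE_CAST({{ identifying_column }} AS STRING), " "), "&",
      {% endfor -%}
      "EOF"
    )
  ) / COUNT(*) unique_percentage
FROM `{{ project_id }}.{{ dataset_id }}.{{ table_id }}` t
""")

sql = query.render(
    identifying_columns=["ano", "sigla_uf"],  # made-up identifying columns
    project_id="basedosdados",
    dataset_id="br_exemplo",
    table_id="tabela",
)
```

Casting each column to STRING with an IFNULL fallback and joining with `"&"` makes the concatenation a stable composite key, so the ratio is 1.0 exactly when the identifying columns are jointly unique.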
4 changes: 2 additions & 2 deletions .github/workflows/data-check/test_data.py
@@ -83,8 +83,8 @@ def test_table_has_no_null_column(configs):
assert result.empty or (result.null_percent.max() < 1)


def test_primary_key_has_unique_values(configs):
config = configs["test_primary_key_has_unique_values"]
def test_identifying_column_has_unique_values(configs):
config = configs["test_identifying_column_has_unique_values"]
result = fetch_data(config)

result = result.unique_percentage.values[0]
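The pass condition used by `test_table_has_no_null_column` can be tried on a toy frame (a sketch with made-up data; pandas is assumed available):

```python
import pandas as pd

# hypothetical result of the null-column check query: one row per column
result = pd.DataFrame(
    {"col_name": ["ano", "valor"], "null_percent": [0.0, 0.37]}
)

# same pass condition as in test_data.py:
# either the result is empty, or no column is 100% null
passed = result.empty or (result.null_percent.max() < 1)
```

A column with `null_percent` of exactly 1.0 (entirely null) would fail the check, while partially-null columns pass.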
4 changes: 4 additions & 0 deletions .github/workflows/env-setup/env_setup.py
@@ -77,6 +77,10 @@ def env_setup():
),
},
},
"ckan": {
"url": os.getenv("CKAN_URL"),
"api_key": decoding_base64(os.environ.get("CKAN_API_KEY")).replace("\n", ""),
},
}

# load the secret of prod and staging data
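The new `ckan` block decodes the API key from an environment variable. A stdlib sketch of what the `decoding_base64` helper presumably does (the helper's name comes from the diff; its body is an assumption):

```python
import base64
import os


def decoding_base64(message: str) -> str:
    # assumed implementation: the secret is stored base64-encoded
    return base64.b64decode(message).decode("utf-8")


# simulate the secret as the workflow would provide it
os.environ["CKAN_API_KEY"] = base64.b64encode(b"my-key\n").decode()

# mirrors the call in env_setup.py, stripping the trailing newline
api_key = decoding_base64(os.environ.get("CKAN_API_KEY")).replace("\n", "")
```

The `.replace("\n", "")` matters because base64-encoding a secret from a file often captures a trailing newline that would corrupt the HTTP header the key ends up in.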
2 changes: 2 additions & 0 deletions .github/workflows/etl-caged.yml
@@ -58,6 +58,8 @@ jobs:
PROJECT_NAME_STAGING: basedosdados-staging
GCP_BD_PROD: ${{ secrets.GCP_BD_DEV_PROD }}
GCP_BD_STAGING: ${{ secrets.GCP_BD_DEV_STAGING }}
CKAN_URL: "https://staging.basedosdados.org"
CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
- name: Run ETL-CAGED
run: python bases/br_me_caged/code/caged_novo/caged_novo.py
shell: bash
57 changes: 57 additions & 0 deletions .github/workflows/metadata-validate.yml
@@ -0,0 +1,57 @@
name: metadata-validate

on:
push:
paths:
- 'bases/**'
jobs:
get-changes:
runs-on: ubuntu-latest
steps:
- name: Check file changes
id: file_changes
uses: trilom/[email protected]
- name: Copy file changes
run: cp $HOME/files.json files.json
- name: Upload file changes
uses: actions/upload-artifact@v2
with:
name: push-changes
path: files.json

metadata-validate:
needs: get-changes
runs-on: ubuntu-latest
steps:
- name: Check out repository
uses: actions/checkout@v2
- name: Download changes
uses: actions/download-artifact@v2
with:
name: push-changes
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: "3.9.x"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install basedosdados==1.6.0 toml
- name: Set up base dos dados environment
run: python .github/workflows/env-setup/env_setup.py
shell: bash
env:
BUCKET_NAME: basedosdados
PROJECT_NAME_PROD: basedosdados
PROJECT_NAME_STAGING: basedosdados-staging
GCP_BD_PROD: ${{ secrets.GCP_TABLE_APPROVE_PROD }}
GCP_BD_STAGING: ${{ secrets.GCP_TABLE_APPROVE_STAGING }}
CKAN_URL: "https://staging.basedosdados.org"
CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
- name: Metadata validate
run: python -u .github/workflows/metadata-validate/metadata_validate.py
shell: bash
env:
PROJECT_ID: ${{ secrets.GCP_MAIN_PROJECT_ID }}
BUCKET_NAME_BACKUP: basedosdados-backup
BUCKET_NAME_DESTINATION: basedosdados
112 changes: 112 additions & 0 deletions .github/workflows/metadata-validate/metadata_validate.py
@@ -0,0 +1,112 @@
import json
import os
import traceback
from pathlib import Path
from pprint import pprint

import basedosdados as bd
import yaml
from basedosdados import Dataset, Storage
from basedosdados.upload.base import Base
from basedosdados.upload.metadata import Metadata


def tprint(title=""):
if not len(title):
print(
"\n",
"#" * 80,
"\n",
)
else:
size = 38 - int(len(title) / 2)
print("\n\n\n", "#" * size, title, "#" * size, "\n")


def load_configs(dataset_id, table_id):
# get the config file in .basedosdados/config.toml
configs_path = Base()._load_config()

# get the path to metadata_path, where the folder bases with metadata information
metadata_path = configs_path["metadata_path"]

# get the path to table_config.yaml
table_path = f"{metadata_path}/{dataset_id}/{table_id}"

return (
# load the table_config.yaml
yaml.load(open(f"{table_path}/table_config.yaml", "r"), Loader=yaml.FullLoader),
# return the path to .basedosdados configs
configs_path,
)


def get_table_dataset_id():
# load the change files in PR || diff between PR and master
changes = Path("files.json").open("r")
changes = json.load(changes)

# create a dict to save the dataset and source_bucket related to each table_id
dataset_table_ids = {}

# create a list to save the table folder path, for each table changed in the commit
table_folders = []
for change_file in changes:
# get the directory path for a table with changes
file_dir = Path(change_file).parent

# append the table directory if it was not already appended
if file_dir not in table_folders:
table_folders.append(file_dir)

# construct the iterable for the table_config paths
table_config_paths = [Path(root / "table_config.yaml") for root in table_folders]

# iterate through each config path
for filepath in table_config_paths:

# check if the table_config.yaml exists in the changed folder
if filepath.is_file():

# load the found table_config.yaml
table_config = yaml.load(open(filepath, "r"), Loader=yaml.SafeLoader)

# add the dataset and source_bucket for each table_id
dataset_table_ids[table_config["table_id"]] = {
"dataset_id": table_config["dataset_id"],
"source_bucket_name": table_config["source_bucket_name"],
}

return dataset_table_ids


def metadata_validate():
# find the dataset and tables of the PR
dataset_table_ids = get_table_dataset_id()

# print dataset tables info
tprint("TABLES FOUND")
pprint(dataset_table_ids)
tprint()

# iterate over each table in dataset of the PR
for table_id in dataset_table_ids.keys():
dataset_id = dataset_table_ids[table_id]["dataset_id"]
source_bucket_name = dataset_table_ids[table_id]["source_bucket_name"]

try:
# push the table to bigquery
md = Metadata(dataset_id=dataset_id, table_id=table_id)

md.validate()
tprint(f"SUCCESS VALIDATE {dataset_id}.{table_id}")
tprint()

except Exception as error:
tprint(f"ERROR ON {dataset_id}.{table_id}")
traceback.print_exc()
tprint()


if __name__ == "__main__":
metadata_validate()
26 changes: 16 additions & 10 deletions .github/workflows/python-ci.yml
@@ -1,13 +1,15 @@
name: python-ci

on:
push:
branches:
- master
pull_request:
# branches:
# - master
# - python-1.6.0
paths:
- .github/**
- python-package/**
workflow_dispatch:


jobs:
guard:
runs-on: ubuntu-latest
@@ -39,12 +41,12 @@ jobs:
run: |
python -m pip install --upgrade pip
python -m pip install isort black pylint
- name: Check library sort
run: python -m isort --check-only --profile black .github python-package
- name: Check code format
run: python -m black --check .github python-package
- name: Check lint
run: python -m pylint --exit-zero .github/**/*.py python-package
# - name: Check library sort
# run: python -m isort --check-only --profile black .github python-package
# - name: Check code format
# run: python -m black --check .github python-package
# - name: Check lint
# run: python -m pylint --exit-zero .github/**/*.py python-package
build-linux:
needs: lint
runs-on: ubuntu-latest
@@ -74,6 +76,8 @@ jobs:
PROJECT_NAME_STAGING: basedosdados-dev
GCP_BD_PROD: ${{ secrets.GCP_BD_DEV_PROD }}
GCP_BD_STAGING: ${{ secrets.GCP_BD_DEV_STAGING }}
CKAN_URL: "https://staging.basedosdados.org"
CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
shell: bash
- name: Test
if: github.event_name == 'pull_request'
@@ -121,6 +125,8 @@ jobs:
PROJECT_NAME_STAGING: basedosdados-dev
GCP_BD_PROD: ${{ secrets.GCP_BD_DEV_PROD }}
GCP_BD_STAGING: ${{ secrets.GCP_BD_DEV_STAGING }}
CKAN_URL: "https://staging.basedosdados.org"
CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
- name: Test
if: github.event_name == 'pull_request'
run: |