diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 3a3016ff9..c3ca77699 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -11,8 +11,8 @@ and the individual `CONTRIBUTING.md` files in each respective project. This repo contains documentation about interacting with the community as well as standard and processes that apply to all repos in `tektoncd`. PRs are welcome and should follow -[the tektoncd pull request process](process.md#pull-request-process). +[the tektoncd pull request process](./process/README.md#pull-request-process). -The [OWNERS](OWNERS) of this repo are the [members of the Tekton governing board](goverance.md). +The [OWNERS](OWNERS) of this repo are the [members of the Tekton governing board](./governance.md). Any substantial changes to the policies in this repo should be reviewed by at least 50% of the governing board. diff --git a/README.md b/README.md index bd0d16e97..b3279b508 100644 --- a/README.md +++ b/README.md @@ -22,7 +22,7 @@ See our standards regarding: - [Code of conduct](code-of-conduct.md) - [Design principles](design-principles.md) -- [Commits](standards.md#commits) +- [Commits](standards.md#commit-messages) - [Code](standards.md#code) - [User profiles](user-profiles.md) - [Releases](releases.md) @@ -33,7 +33,7 @@ Find out about our processes: - [Propose features](./process/README.md#proposing-features) - [Contributor ladder](./process/contributor-ladder.md) - Pull request [reviews](./process/README.md#reviews) and [process](./process/README.md#pull-request-process) -- [Propose projects](./process/README.md#proposing-projects) +- [Propose projects](./process/projects.md) - [GitHub Org Management](org/README.md), including [requirements to join the org](org/README.md#requirements) - [The CDF CLA](./process/README.md#cla) diff --git a/adopters.md b/adopters.md index 83f5454da..3ea7544fb 100644 --- a/adopters.md +++ b/adopters.md @@ -8,41 +8,41 @@ Tell us about, we welcome [pull requests](https://github.com/tektoncd/community/ ## 
Open Source Projects -| Project Name | Description | Repository | -|--------------|-------------|---------| -| [Shipwright](https://shipwright.io/) | Shipwright is an extensible framework for building container images on Kubernetes. | [https://github.com/shipwright-io](github.com/shipwright-io) -| [Jenkins X](https://jenkins-x.io/) | All In One CI/CD including everything you need to start exploring Kubernetes | [github.com/jenkins-x](https://github.com/jenkins-x) | -| kfp-tekton | Kubeflow Pipelines SDK for Tekton | [github.com/kubeflow/kfp-tekton](https://github.com/kubeflow/kfp-tekton/) | -| OpenShift Pipelines | OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. | [github.com/openshift-pipelines](https://github.com/openshift-pipelines) | -| FRSCA | OpenSSF's Factory for Repeatable Secure Creation of Artifacts (aka FRSCA pronounced Fresca) aims to help secure the supply chain by securing build pipelines. | [github.com/buildsec/frsca](https://github.com/buildsec/frsca) | -| VSCode Tekton extension | VSCode extension to manage Tekton resources. | [github.com/redhat-developer/vscode-tekton](https://github.com/redhat-developer/vscode-tekton) | -| JetBrains IDEs Tekton plugin | JetBrains IDEs plugin to manage Tekton resources. | [github.com/redhat-developer/intellij-tekton](https://github.com/redhat-developer/intellij-tekton) | -| Automatiko Approval task | Brings approval tasks into Tekton pipeline with various strategies (single approval, multiple approvals, four eye based approval). 
| [github.com/automatiko-io/automatiko-approval-task](https://github.com/automatiko-io/automatiko-approval-task) | -| [EPAM Delivery Platform (EDP)](https://epam.github.io/edp-install/) | Cloud-agnostic SaaS/PaaS solution for software development with a pre-defined set of CI/CD pipelines and tools, enabling to kickstart development quickly with established processes for code review, release, versioning, branching, and deployment. | [github.com/epam/edp-tekton](https://github.com/epam/edp-tekton) +| Project Name | Description | Repository | +|------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------| +| [Shipwright](https://shipwright.io/) | Shipwright is an extensible framework for building container images on Kubernetes. | [https://github.com/shipwright-io](https://github.com/shipwright-io) | +| [Jenkins X](https://jenkins-x.io/) | All In One CI/CD including everything you need to start exploring Kubernetes | [github.com/jenkins-x](https://github.com/jenkins-x) | +| kfp-tekton | Kubeflow Pipelines SDK for Tekton | [github.com/kubeflow/kfp-tekton](https://github.com/kubeflow/kfp-tekton/) | +| OpenShift Pipelines | OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. | [github.com/openshift-pipelines](https://github.com/openshift-pipelines) | +| FRSCA | OpenSSF's Factory for Repeatable Secure Creation of Artifacts (aka FRSCA pronounced Fresca) aims to help secure the supply chain by securing build pipelines. 
| [github.com/buildsec/frsca](https://github.com/buildsec/frsca) | +| VSCode Tekton extension | VSCode extension to manage Tekton resources. | [github.com/redhat-developer/vscode-tekton](https://github.com/redhat-developer/vscode-tekton) | +| JetBrains IDEs Tekton plugin | JetBrains IDEs plugin to manage Tekton resources. | [github.com/redhat-developer/intellij-tekton](https://github.com/redhat-developer/intellij-tekton) | +| Automatiko Approval task | Brings approval tasks into Tekton pipeline with various strategies (single approval, multiple approvals, four eye based approval). | [github.com/automatiko-io/automatiko-approval-task](https://github.com/automatiko-io/automatiko-approval-task) | +| [EPAM Delivery Platform (EDP)](https://epam.github.io/edp-install/) | Cloud-agnostic SaaS/PaaS solution for software development with a pre-defined set of CI/CD pipelines and tools, enabling to kickstart development quickly with established processes for code review, release, versioning, branching, and deployment. | [github.com/epam/edp-tekton](https://github.com/epam/edp-tekton) | ## Vendors -| Company | How we use Tekton | Notes | May use logo in Tekton public presentations (optional) | -|---------|-----------|---------|----------| -| IBM | IBM offers Tekton as a Service through IBM Cloud Continuous Delivery Pipelines| [IBM Cloud Continuous Delivery](https://www.ibm.com/cloud/continuous-delivery)| Yes | -| Google | Tekton is designed to work well with Google Cloud-specific Kubernetes tooling. This includes deployments to Google Kubernetes Engine as well as artifact storage and scanning using Container Registry. 
You can also build, test, and deploy across multiple environments such as VMs, serverless, Kubernetes, or Firebase.| [Tekton on Google Cloud](https://cloud.google.com/tekton)| Yes | +| Company | How we use Tekton | Notes | May use logo in Tekton public presentations (optional) | +|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|--------------------------------------------------------| +| IBM | IBM offers Tekton as a Service through IBM Cloud Continuous Delivery Pipelines | [IBM Cloud Continuous Delivery](https://www.ibm.com/cloud/continuous-delivery) | Yes | +| Google | Tekton is designed to work well with Google Cloud-specific Kubernetes tooling. This includes deployments to Google Kubernetes Engine as well as artifact storage and scanning using Container Registry. You can also build, test, and deploy across multiple environments such as VMs, serverless, Kubernetes, or Firebase. 
| [Tekton on Google Cloud](https://cloud.google.com/tekton) | Yes | ## End Users -| Company | How we use Tekton | Notes | May use logo in Tekton public presentations (optional) | -|---------|-----------|---------|----------| -| IBM | Tekton is used as the basis of IBM Cloud's broadly adopted internal DevSecOps pipelines | | Yes | -| Marriott Vacations Worldwide | All corporate CI/CD pipelines were migrated from Jenkins to Tekton, running on ROSA (RedHat Openshift Service on AWS) | | Yes | -| Nubank | Tekton is the basis of the Nubank's primary CI/CD platform, supporting millions of builds per month | | Yes | -| OneStock | Tekton is central to OneStock's CI/CD operations, managing 35+ different pipelines for 500+ repositories | | Yes | +| Company | How we use Tekton | Notes | May use logo in Tekton public presentations (optional) | +|------------------------------|-----------------------------------------------------------------------------------------------------------------------|-------|--------------------------------------------------------| +| IBM | Tekton is used as the basis of IBM Cloud's broadly adopted internal DevSecOps pipelines | | Yes | +| Marriott Vacations Worldwide | All corporate CI/CD pipelines were migrated from Jenkins to Tekton, running on ROSA (RedHat Openshift Service on AWS) | | Yes | +| Nubank | Tekton is the basis of the Nubank's primary CI/CD platform, supporting millions of builds per month | | Yes | +| OneStock | Tekton is central to OneStock's CI/CD operations, managing 35+ different pipelines for 500+ repositories | | Yes | ## Tekton Friends Meet more [Tekton friends](https://github.com/tektoncd/friends/)

-Tekton Friends logo (Tekton logo with rabbit and dog +Tekton Friends logo (Tekton logo with rabbit and dog

diff --git a/governance.md b/governance.md index 0b6e12d57..301ef7391 100644 --- a/governance.md +++ b/governance.md @@ -22,7 +22,7 @@ one year: every year either two or three of the seats are up for election. ### Current members | Full Name | Company | GitHub | Slack | Elected On | Until | -| ----------------- | :--------: | ------------------------------------------- | ------------------------------------------------------------- | ---------- | -------- | +|-------------------|:----------:|---------------------------------------------|---------------------------------------------------------------|------------|----------| | Andrea Frittoli | IBM | [afrittoli](https://github.com/afrittoli) | [@Andrea Frittoli](https://tektoncd.slack.com/team/UJ411P2CC) | Feb 2022 | Feb 2024 | | Billy Lynch | Chainguard | [wlynch](https://github.com/wlynch) | [@Billy Lynch](https://tektoncd.slack.com/team/UJ7BLGSB0) . | Feb 2023 | Feb 2024 | | Dibyo Mukherjee | Google | [dibyom](https://github.com/dibyom) | [@Dibyo Mukherjee](https://tektoncd.slack.com/team/UJ73HM7PZ) | Feb 2023 | Feb 2025 | @@ -35,7 +35,7 @@ distributed across the five members of the committee. 
#### Former members ❤️ | Full Name | GitHub | Slack | Elected On | Until | -| --------------- | --------------------------------------------- | ------------------------------------------------------------- | ------------------- | -------- | +|-----------------|-----------------------------------------------|---------------------------------------------------------------|---------------------|----------| | Priya Wadhwa | [priyawadhwa](https://github.com/priyawadhwa) | [@Priya Wadhwa](https://tektoncd.slack.com/team/U02T0CS9PN0) | Feb 2022 | Feb 2023 | | Christie Wilson | [bobcatfish](https://github.com/bobcatfish) | [@Christie Wilson](https://tektoncd.slack.com/team/UJ6DECY78) | Bootstrap committee | Feb 2023 | | Andrew Bayer | [abayer](https://github.com/abayer) | [@Andrew Bayer](https://tektoncd.slack.com/team/UJ6DJ4MSS) | Feb 2020 | Feb 2022 | @@ -85,7 +85,7 @@ The committee is responsible for a series of specific artifacts and activities: - The [Code of Conduct](code-of-conduct.md) and handling violations - The [Project Communication Channels](contact.md) -- The [Contribution Process](process.md) and +- The [Contribution Process](./process/README.md) and [Development Standards](standards.md) - The [Tekton Mission and Vision](roadmap.md) - Select [election officers](#election-officers) to run elections @@ -108,7 +108,7 @@ be altered or dropped. ### Voter Eligibility -Anyone who has at least 15 [contributions](process.md#contributions) in the last +Anyone who has at least 15 [contributions](./process/README.md#contributions) in the last 12 months. 
The [dashboard on tekton.devstats.cd.foundation](https://tekton.devstats.cd.foundation/d/9/developer-activity-counts-by-repository-group-table?orgId=1&var-period_name=Last%20year&var-metric=contributions&var-repogroup_name=All&var-country_name=All) will show GitHub based contributions; contributions that are not GitHub based diff --git a/org/README.md b/org/README.md index aaaf22d4b..871d970a0 100644 --- a/org/README.md +++ b/org/README.md @@ -18,7 +18,7 @@ are a member of the tektoncd org or not! If you are regularly contributing to repos in tektoncd, then you can become a member of the Tekton GitHub organization in order to have tests run against your -pull requests without requiring [`ok-to-test`](process.md#prow-commands). +pull requests without requiring [`ok-to-test`](../process/README.md#prow-commands). Being part of the org also makes it possible to have issues assigned. To be eligible to become a member of the org you must (note that this is at the diff --git a/process/README.md b/process/README.md index f84e29932..76b3578d7 100644 --- a/process/README.md +++ b/process/README.md @@ -68,13 +68,13 @@ In general, you should follow the [Tekton Enhancement Proposals (`TEP`) process](./tep-process.md). A Tekton Enhancement Proposal (TEP) is a way to propose, communicate and coordinate on new efforts for the Tekton project. You can read the full details of the project in -[TEP-1](./teps/0001-tekton-enhancement-proposal-process.md). +[TEP-1](../teps/0001-tekton-enhancement-proposal-process.md). Some suggestions for how to do this: 1. Write up a design doc and share it with - [the mailing list](contact.md#mailing-list). -2. Bring your design/ideas to [our working group meetings](working-groups.md) + [the mailing list](../contact.md#mailing-list). +2. Bring your design/ideas to [our working group meetings](../working-groups.md) for discussion. 3. Write a [`TEP`](./tep-process.md) from the initial design doc and working group feedback. 
@@ -87,7 +87,7 @@ A great proposal will include: yourself to brainstorm a couple more approaches may give you new ideas or make clear that your initial proposal is the best one -Also feel free to reach out to us on [slack](contact.md#slack) if you want any +Also feel free to reach out to us on [slack](../contact.md#slack) if you want any help/guidance. Thanks so much!! @@ -110,7 +110,7 @@ See our [contributor ladder](./contributor-ladder.md) for more information. ## Reviews Reviewers will be auto-assigned by [Prow](#pull-request-process) from the -[OWNERS](#OWNERS), which acts as suggestions for which `OWNERS` should review +[OWNERS](../OWNERS), which acts as suggestions for which `OWNERS` should review the PR. (OWNERS, your review requests can be viewed at [https://github.com/pulls/review-requested](https://github.com/pulls/review-requested)). @@ -146,7 +146,7 @@ Before a PR can be merged, it must have both `/lgtm` AND `/approve`: - `/lgtm` can be added by ["Reviewers"](https://github.com/tektoncd/community/blob/main/process.md#reviewer), aka anyone in Reviewer team specific to the repo -- `/approve` can be added only by [OWNERS](#owners) +- `/approve` can be added only by [OWNERS](../OWNERS) The merge will happen automatically once the PR has both `/lgtm` and `/approve`, and all tests pass. If you don't want this to happen you should @@ -199,7 +199,7 @@ individual CLA or indicate your affilation with a company that has signed it part of the company, for example often this is managed via the domain your email address). -Members of [the governing board](governance.md) are authorized to administer the +Members of [the governing board](../governance.md) are authorized to administer the CDF CLA via the website and can control which repos it is applied to. 
## Postmortems diff --git a/process/contributor-ladder.md b/process/contributor-ladder.md index df53e1dbd..2b4255862 100644 --- a/process/contributor-ladder.md +++ b/process/contributor-ladder.md @@ -29,7 +29,7 @@ who have stopped being anonymous and started being active in project discussions. - Responsibilities: - - Must follow the [Tekton CoC](code-of-conduct.md) + - Must follow the [Tekton CoC](../code-of-conduct.md) - How users can get involved with the community: - Participating in community discussions ([GitHub, Slack, mailing list, etc](/contact.md)) @@ -42,11 +42,11 @@ discussions. ## Contributor Description: A Contributor makes direct contributions to the project and adds -value to it. [Contributions need not be code](#contributions). People at the +value to it. [Contributions need not be code](./README.md#contributions). People at the Contributor level may be new contributors, and they can contribute occasionally. Contributors may be eligible to vote and run in elections. See -[Elections](./governance.md#elections) for more details. +[Elections](../governance.md#elections) for more details. 
A Contributor must meet the responsibilities of a [Community Participant](#community-participant), plus: @@ -58,7 +58,7 @@ A Contributor must meet the responsibilities of a - Report and sometimes resolve issues - Occasionally submit PRs - Contribute to the documentation - - Participate in [meetings](working-groups.md) + - Participate in [meetings](../working-groups.md) - Answer questions from other community members - Submit feedback on issues and PRs - Test, review, and verify releases and patches @@ -121,7 +121,7 @@ Reviewers have all the rights and responsibilities of an - Responsibilities include: - Proactively help triage and respond to incoming issues (GitHub, Slack, mailing list) - - Following the [reviewing guide](./standards.md) + - Following the [reviewing guide](../standards.md) - Reviewing most Pull Requests against their specific areas of responsibility - Reviewing at least 10 PRs per year - Helping other contributors become reviewers @@ -136,8 +136,8 @@ Reviewers have all the rights and responsibilities of an - Is supportive of new and occasional contributors and helps get useful PRs in shape to commit - Additional privileges: - - May [`/lgtm`](#prow-commands) pull requests. - - Can be allowed to [`/approve`](#prow-commands) pull requests in specific + - May [`/lgtm`](./README.md#prow-commands) pull requests. + - Can be allowed to [`/approve`](./README.md#prow-commands) pull requests in specific sub-directories of a project (by maintainer discretion) - Can recommend and review other contributors to become Reviewers @@ -167,7 +167,7 @@ The process of becoming a Reviewer is: [OWNERS alias](https://www.kubernetes.dev/docs/guide/owners/#owners_aliases)). 2. At least two Reviewers/[Maintainers](#maintainer) of the team that owns that repository or directory approve the PR. -3. Update [org.yaml](./org/org.yaml) to add the new Reviewer to the +3. 
Update [org.yaml](../org/org.yaml) to add the new Reviewer to the corresponding [GitHub team(s)](https://docs.github.com/en/organizations/organizing-members-into-teams/about-teams). @@ -238,11 +238,11 @@ Process of becoming an Maintainer: 2. The nominee will add a comment to the PR testifying that they agree to all requirements of becoming a Maintainer. 3. A majority of the current Maintainers must then approve the PR. -4. Update [org.yaml](./org/org.yaml) to add the new maintainer to the +4. Update [org.yaml](../org/org.yaml) to add the new maintainer to the corresponding [GitHub team(s)](https://docs.github.com/en/organizations/organizing-members-into-teams/about-teams). -- Each project has a `.maintainers` entry in [`org.yaml`](./org/org.yaml), +- Each project has a `.maintainers` entry in [`org.yaml`](../org/org.yaml), where `` is the name of the GitHub repository. The only exception is `pipeline` whose maintainer team is name `core.maintainers`. @@ -252,7 +252,7 @@ Description: The Tekton Governance committee is the governing body of the Tekton open source project. It's an elected group that represents the contributors to the project, and has an oversight on governance and technical matters. -See [governance.md](governance.md) for requirements, responsibilities, and +See [governance.md](../governance.md) for requirements, responsibilities, and election process. - Additional privileges: @@ -268,7 +268,7 @@ project. - Inactivity is measured by: - Failing to meet role requirements. - - Periods of no [contributions](#contributions) for longer than 4 months + - Periods of no [contributions](./README.md#contributions) for longer than 4 months - Periods of no communication for longer than 2 months - Consequences of being inactive include: - Involuntary removal or demotion @@ -284,7 +284,7 @@ because it protects the community and its deliverables while also opens up opportunities for new contributors to step in. 
Involuntary removal or demotion is handled through a vote by a majority of the -[Tekton Governing Board](governance.md). +[Tekton Governing Board](../governance.md). ### Stepping Down/Emeritus Process diff --git a/process/projects.md b/process/projects.md index b14bceec7..631ba3903 100644 --- a/process/projects.md +++ b/process/projects.md @@ -5,7 +5,7 @@ Tekton is made up of multiple projects! New projects can be created in three ways: 1. **Experimental repo**: Incubating projects may start off in the [experimental repo](https://github.com/tektoncd/experimental) -so community members can collaborate before [promoting the project to a top level repo](#promotion-from-experimental-to-top-level-repo). +so community members can collaborate before [promoting the project to a top level repo](#promoting-a-project-from-experimental-to-top-level-repo). 2. **Adoption into tektoncd org**: Projects hosted outside of [the `tektoncd` org](https://github.com/tektoncd) may be [moved into the org](#proposing-adoption-of-an-existing-project) and adopted by the tekton community. @@ -18,12 +18,12 @@ Projects can be added to the experimental repo when the [governing committee members](https://github.com/tektoncd/community/blob/main/governance.md) consider them to be potential candidates to be Tekton top level projects, but would like to see more design and discussion around before -[promoting to offical tekton projects](#promotion-from-experimental-to-top-level-repo). +[promoting to official tekton projects](#promoting-a-project-from-experimental-to-top-level-repo). Don't feel obligated to add a project to the experimental repo if it is not immediately accepted as a top level project: another completely valid path to being a top level project is to iterate on the project in a completely different -repo and org, while [discussing with the Tekton community](contact.md). +repo and org, while [discussing with the Tekton community](../contact.md). 
## Project requirements  @@ -34,7 +34,7 @@ must: 1. Use the `Apache license 2.0`. 1. Contain and keep up to date the following documentation: - - The [tekton community code of conduct](code-of-conduct.md) + - The [tekton community code of conduct](../code-of-conduct.md) - A `README.md` which introduces the project and points folks to additional docs - A `DEVELOPMENT.md` which explains to new contributors how to ramp up and @@ -45,7 +45,7 @@ must: - Contains any project specific guidelines - Links contributors to the project's DEVELOPMENT.md 1. Use [GitHub templates](https://help.github.com/en/articles/about-issue-and-pull-request-templates) for [Issues](https://help.github.com/en/articles/about-issue-and-pull-request-templates#issue-templates) and [Pull requests](https://help.github.com/en/articles/about-issue-and-pull-request-templates#pull-request-templates). -1. Have its own set of [OWNERS](#owners) who are reponsible for maintaining that +1. Have its own set of [OWNERS](../OWNERS) who are responsible for maintaining that project. 1. Use the same standard of automation (e.g. continuous integration on PRs), via [the plumbing repo](https://github.com/tektoncd/plumbing). It is the governing board's responsibility to set up infrastructure in the plumbing repo for new projects. diff --git a/process/tep-process.md b/process/tep-process.md index 29b9c245d..b80a51267 100644 --- a/process/tep-process.md +++ b/process/tep-process.md @@ -3,7 +3,7 @@ A Tekton Enhancement Proposal (TEP) is a way to propose, communicate and coordinate on new efforts for the Tekton project. You can read the full details of the project in -[TEP-1](0001-tekton-enhancement-proposal-process.md). +[TEP-1](../teps/0001-tekton-enhancement-proposal-process.md). * [What is a TEP](#what-is-a-tep) * [Creating TEPs](#creating-teps) @@ -61,7 +61,7 @@ This TEP process is related to This proposal attempts to place these concerns within a general framework. 
-See [TEP-1](0001-tekton-enhancement-proposal-process.md) for more +See [TEP-1](../teps/0001-tekton-enhancement-proposal-process.md) for more details. The TEP `OWNERS` are the **main** owners of the following projects: @@ -78,7 +78,7 @@ The TEP `OWNERS` are the **main** owners of the following projects: ## Creating and Merging TEPs -To create a new TEP, use the [teps script](./tools/README.md): +To create a new TEP, use the [teps script](../teps/tools/README.md): ```shell $ ./teps/tools/teps.py new --title "The title of the TEP" --author nick1 --author nick2 @@ -122,11 +122,11 @@ design and update the missing part in follow-up pull requests which moves the TE ### Approval requirements -Reviewers should use [`/approve`](../process.md#prow-commands) to indicate that they approve +Reviewers should use [`/approve`](../process/README.md#prow-commands) to indicate that they approve of the PR being merged. TEP must be approved by ***at least two owners*** from different companies. -Owners are people who are [maintainers](../process.md#maintainer) for the community repo. +Owners are people who are [maintainers](../process/contributor-ladder.md#maintainer) for the community repo. This should prevent a company from *force pushing* a TEP (and thus a feature) in the tektoncd projects. @@ -163,10 +163,10 @@ TEP collaborators are permitted to be reviewers. ### Merging TEP PRs Once all assigned reviewers have approved the PR, the PR author can reach out to one of the assigned reviewers -or another ["reviewer"](../process.md#reviewer) for the community repo to merge the PR. -The reviewer can merge the PR by adding a [`/lgtm` label](../process.md#prow-commands). +or another ["reviewer"](../process/contributor-ladder.md#reviewer) for the community repo to merge the PR. +The reviewer can merge the PR by adding a [`/lgtm` label](../process/README.md#prow-commands). 
- If a contributor adds "lgtm" before all assignees have had the chance to review, - add a ["hold"](../process.md#prow-commands) to prevent the PR from being merged until then. + add a ["hold"](../process/README.md#prow-commands) to prevent the PR from being merged until then. - Note: automation prevents the PR author from merging their own PR. If the TEP has undergone substantial changes since any reviewers have approved it, the author diff --git a/roadmap.md b/roadmap.md index d054d0110..31af52712 100644 --- a/roadmap.md +++ b/roadmap.md @@ -31,7 +31,7 @@ What this vision looks like differs across different [users](user-profiles.md): because building on top of Tekton means they don't have to re-invent the wheel and out of the box they get scalable, serverless cloud native execution * **Engineers who need CI/CD**: (aka all software engineers!) These users - (including [Pipeline and Task authors](user-profiles.md#2-pipeline-and-task-authors) + (including [Pipeline and Task authors](user-profiles.md#1-pipeline-and-task-authors) and [Pipeline and Task users](user-profiles.md#2-pipeline-and-task-users) will benefit from the rich high quality catalog of reusable components: diff --git a/standards.md b/standards.md index e1062e41d..c38514150 100644 --- a/standards.md +++ b/standards.md @@ -16,7 +16,7 @@ Each Pull Request is expected to meet the following expectations around: * [Pull Request Description](#pull-request-description) * [Release Notes](#release-notes) -* [Commit Messages](#commits) +* [Commit Messages](#commit-messages) * [Example Commit Message](#example-commit-message) * [Small Pull Requests](#small-pull-requests) * [Incremental Feature Development](#incremental-feature-development) diff --git a/teps/0001-tekton-enhancement-proposal-process.md b/teps/0001-tekton-enhancement-proposal-process.md index 94708e11a..eff42b03d 100644 --- a/teps/0001-tekton-enhancement-proposal-process.md +++ b/teps/0001-tekton-enhancement-proposal-process.md @@ -139,7 +139,7 @@ 
The following model indentifies the responsible parties for TEPs: | **Workstream** | **Driver** | **Approver** | **Contributor** | **Informed** | -| --- | --- | --- | --- | --- | +|-------------------------|---------------------|--------------------------|------------------------------------------------------|--------------| | TEP Process Stewardship | Tekton Contributors | Tekton Governing members | Tekton Contributors | Community | | Enhancement delivery | Enhancement Owner | Project(s) Owners | Enhancement Implementer(s) (may overlap with Driver) | Community | diff --git a/teps/0005-tekton-oci-bundles.md b/teps/0005-tekton-oci-bundles.md index 248da78eb..c3d02a6e8 100644 --- a/teps/0005-tekton-oci-bundles.md +++ b/teps/0005-tekton-oci-bundles.md @@ -19,7 +19,7 @@ status: implemented - [Contract](#contract) - [API](#api) - [User Stories (optional)](#user-stories-optional) - - [Versioned Tasks and Pipelines and Pipeline-as-code](#versioned-s-and-s-and-pipeline-as-code) + - [Versioned Tasks and Pipelines and Pipeline-as-code](#versioned-tasks-and-pipelines-and-pipeline-as-code) - [Shipping catalog resources as OCI images](#shipping-catalog-resources-as-oci-images) - [Tooling](#tooling) - [Risks and Mitigations](#risks-and-mitigations) diff --git a/teps/0007-conditions-beta.md b/teps/0007-conditions-beta.md index e57136604..45b6feb17 100644 --- a/teps/0007-conditions-beta.md +++ b/teps/0007-conditions-beta.md @@ -40,7 +40,7 @@ tags, and then generate with `hack/update-toc.sh`. 
- [Efficiency](#efficiency-2) - [CelRun Custom Task](#celrun-custom-task) - [Expression Language Interceptor](#expression-language-interceptor) - - [Status](#status-2) + - [Status](#status) - [Minimal Skipped](#minimal-skipped) - [ConditionSucceeded](#conditionsucceeded) - [ConditionSkipped](#conditionskipped) @@ -658,7 +658,7 @@ To make it flexible, similarly to Triggers which uses language interceptors that ### Status -#### Minimal Skipped +#### Minimal Skipped Add `Skipped Tasks` section to the `PipelineRunStatus` that contains a list of `SkippedTasks` that contains a `Name` field which has the `PipelineTaskName`. The `WhenExpressions` that made the `Task` skipped can be found in the `PipelineSpec`, the `Parameter` variables used can be found in the `PipelineSpec` and the `Results` used from previous `Tasks` can be found in the relevant `TaskRun`. It may be more work for users to reverse-engineer to identify why a `Task` was skipped, but gives us the benefit of significantly reducing the `PipelineRunStatus` compared to what we currently have with `Conditions`. diff --git a/teps/0008-support-knative-service-for-triggers-eventlistener-pod.md b/teps/0008-support-knative-service-for-triggers-eventlistener-pod.md index 6643c0540..2df231504 100644 --- a/teps/0008-support-knative-service-for-triggers-eventlistener-pod.md +++ b/teps/0008-support-knative-service-for-triggers-eventlistener-pod.md @@ -25,7 +25,7 @@ status: implemented - [Usage examples](#usage-examples) - [Default EventListener yaml](#default-eventlistener-yaml) - [Kubernetes Based](#kubernetes-based) - - [Knative Service OR any CRD](#knative-service-) + - [Knative Service OR any CRD](#knative-service-or-any-crd) - [Design Details](#design-details) - [Contract](#contract) - [Spec](#spec) @@ -142,7 +142,7 @@ spec: name: pipeline-template ``` -#### Kubernetes Based +#### Kubernetes Based This is exactly the same whatever we have right now with default. 
The reason for moving `serviceAccountName` and `podTemplate` to the `kubernetesResource` field is that they are part of the [WithPodSpec{}](https://github.com/knative/pkg/blob/master/apis/duck/v1/podspec_types.go#L49) duck type diff --git a/teps/0010-optional-workspaces.md b/teps/0010-optional-workspaces.md index 5d506cb61..10ecf2765 100644 --- a/teps/0010-optional-workspaces.md +++ b/teps/0010-optional-workspaces.md @@ -28,7 +28,7 @@ status: implemented - [Story 6](#story-6) - [Risks and Mitigations](#risks-and-mitigations) - [Design Details](#design-details) - - [Example: git-clone Catalog Task](#example--catalog-task) + - [Example: git-clone Catalog Task](#example-git-clone-catalog-task) - [Test Plan](#test-plan) - [Drawbacks](#drawbacks) - [Alternatives](#alternatives) diff --git a/teps/0014-step-timeout.md b/teps/0014-step-timeout.md index 623e1d28b..c6726e8ea 100644 --- a/teps/0014-step-timeout.md +++ b/teps/0014-step-timeout.md @@ -19,7 +19,7 @@ status: implemented - [Risks and Mitigations](#risks-and-mitigations) - [Design Details](#design-details) - [Caveats](#caveats) - - [Resolution of a Step Timeout](#resolution-of-a--timeout) + - [Resolution of a Step Timeout](#resolution-of-a-step-timeout) - [Test Plan](#test-plan) - [References](#references) diff --git a/teps/0022-trigger-immutable-input.md b/teps/0022-trigger-immutable-input.md index ec2ef8712..c3ca00ff7 100644 --- a/teps/0022-trigger-immutable-input.md +++ b/teps/0022-trigger-immutable-input.md @@ -25,7 +25,7 @@ status: implemented - [Drawbacks](#drawbacks) - [Payload redaction](#payload-redaction) - [Alternatives](#alternatives) - - [Introduce input field](#introduce--field) + - [Introduce input field](#introduce-input-field) - [Keep mutable behavior](#keep-mutable-behavior) - [Upgrade and Migration Strategy](#upgrade-and-migration-strategy) - [Implementation PRs](#implementation-prs) - [References](#references) diff --git a/teps/0024-embedded-trigger-templates.md b/teps/0024-embedded-trigger-templates.md index 29ac88ba9..d7c334d67
100644 --- a/teps/0024-embedded-trigger-templates.md +++ b/teps/0024-embedded-trigger-templates.md @@ -87,7 +87,7 @@ There are two changes proposed to the Trigger spec: ref: "my-tt" ``` -## Upgrade & Migration Strategy +## Upgrade & Migration Strategy For the `name` to `ref` change, we'll make the upgrade process backwards compatible: @@ -100,7 +100,7 @@ compatible: 1. In a future release, we'll remove the `name` field from the spec. -## References +## References 1. GitHub issue: https://github.com/tektoncd/triggers/issues/616 diff --git a/teps/0025-hermekton.md b/teps/0025-hermekton.md index 2dfe0e1f9..7cefa42f8 100644 --- a/teps/0025-hermekton.md +++ b/teps/0025-hermekton.md @@ -98,14 +98,14 @@ This currently holds just a single bool, but could be expanded in the future. See [this rationale](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#primitive-types) in the k8s API style guide for why we introduce a new type. -| Object | Field | Description | -| --- | --- | --- | -| Task | spec.steps[*].ExecutionMode | Whether or not Steps of this Task should happen hermetically. This can be overridden on the TaskRun | -| Task | spec.ExecutionMode | Whether or not TaskRuns of this Task should happen hermetically. This can be overridden on the TaskRun | -| TaskRun | spec.ExecutionMode | Whether or not this TaskRun will be run hermetically. This can be used to override the value on the Task | -| Pipeline | spec.ExecutionMode |Whether or not the **entire** pipeline should run hermetically. This can be overridden on the PipelineRun | -| PipelineRun | spec.ExecutionMode | Whether or not the **entire** PipelineRun will be run hermetically. This can be used to override the default value on the Pipeline, but can be overridden for a specific TaskRun below. -| PipelineRun | spec.TaskRunSpecs.ExecutionMode | Whether or not this specific TaskRrun should be run hermetically during a PipelineRun. 
This overrides the Task, Pipeline and PipelineRun defaults. | +| Object | Field | Description | +|-------------|---------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Task | spec.steps[*].ExecutionMode | Whether or not Steps of this Task should happen hermetically. This can be overridden on the TaskRun | +| Task | spec.ExecutionMode | Whether or not TaskRuns of this Task should happen hermetically. This can be overridden on the TaskRun | +| TaskRun | spec.ExecutionMode | Whether or not this TaskRun will be run hermetically. This can be used to override the value on the Task | +| Pipeline | spec.ExecutionMode | Whether or not the **entire** pipeline should run hermetically. This can be overridden on the PipelineRun | +| PipelineRun | spec.ExecutionMode | Whether or not the **entire** PipelineRun will be run hermetically. This can be used to override the default value on the Pipeline, but can be overridden for a specific TaskRun below. | +| PipelineRun | spec.TaskRunSpecs.ExecutionMode | Whether or not this specific TaskRun should be run hermetically during a PipelineRun. This overrides the Task, Pipeline and PipelineRun defaults. | This execution mode will be applied to all **user-specified** containers, including Steps and Sidecars. Tekton-injected ones (init containers, resource containers) will not run with this policy. diff --git a/teps/0026-interceptor-plugins.md b/teps/0026-interceptor-plugins.md index f8ac02674..db3b2580f 100644 --- a/teps/0026-interceptor-plugins.md +++ b/teps/0026-interceptor-plugins.md @@ -406,7 +406,7 @@ data: * Extensibility: Having a CRD allows us to add new features to Interceptors down the line. 
For instance, we might want to support a `tlsClientConfig` field that allows the listener to communicate with the interceptor over TLS (similar to admission webhook's [client config](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#webhookclientconfig-v1-admissionregistration-k8s-io)). It also leaves open the possibility in the future of supporting alternate transports (e.g. GRPC instead of HTTP). -### Use go-plugin for built-in interceptors +### Use go-plugin for built-in interceptors We'd leverage a library like HashiCorp's go-plugin for built-in interceptors while keeping the existing webhook interface for extensibility. In this model, the interceptors will run as separate processes but within the same pod. They will communicate with the listener over RPC. So, in many ways this is similar to the proposal but the key difference is that everything runs in the same pod. This is a reasonable alternative. But it does lead us to having to maintain two different interceptor models - one for plugins and one for webhooks. The main implications are: * We can keep using the same service account model as before. @@ -420,7 +420,7 @@ We'd leverage a library like HashiCorp's go-plugin for built-in interceptors whi We'll package built-in interceptors as we do now, but an operator can turn some of the interceptors "off" using a flag or a config map. This is similar to Kubernetes where some admission controllers are [compiled in](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) and can only be [enabled](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-controller) or disabled. Users can extend by writing and registering their own [dynamic admission webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) (similar to webhook interceptors).
-### Implement built-in interceptors using the current webhook interface +### Implement built-in interceptors using the current webhook interface In this model, the built-in interceptors will still be implemented as separate services but they will use the existing Webhook Interface. * Passing interceptor params (e.g. the CEL filter expression) would be much more verbose. We'd have to invent extra headers to pass this information through. As an example, this is what a CEL interceptor usage would look like: diff --git a/teps/0027-https-connection-to-triggers-eventlistener.md b/teps/0027-https-connection-to-triggers-eventlistener.md index ed238b2ce..1b5ed6149 100644 --- a/teps/0027-https-connection-to-triggers-eventlistener.md +++ b/teps/0027-https-connection-to-triggers-eventlistener.md @@ -150,6 +150,6 @@ At a high level, below are a few implementation details * Using third-party solutions like service mesh. * Writing a simple sidecar which injects tls certs into the eventlistener pod. -## References +## References 1. GitHub issue: https://github.com/tektoncd/triggers/issues/650 2. Implementation: https://github.com/tektoncd/triggers/pull/819 diff --git a/teps/0028-task-execution-status-at-runtime.md b/teps/0028-task-execution-status-at-runtime.md index 02333d3fb..ad3b53a48 100644 --- a/teps/0028-task-execution-status-at-runtime.md +++ b/teps/0028-task-execution-status-at-runtime.md @@ -80,11 +80,11 @@ This variable is instantiated and available at the runtime. In the following exa fi ``` -| State | Description | -| ----- | ----------- | -| `Succeeded` | The `pipelineTask` was successful i.e. a respective pod was created and completed successfully. The `pipelineTask` had a `taskRun` with `Succeeded` `ConditionType` and `True` `ConditionStatus`. | -| `Failed` | The `pipelineTask` failed i.e. a respective pod was created but exited with error. The `pipelineTask` has a `taskRun` with `Succeeded` `ConditionType`, `False` `ConditionStatus` and have exhausted all the retries.
| -| `None` | no execution state available either (1) the `pipeline` stopped executing `dag` tasks before it could get to this task i.e. this task was not started/executed or (2) the `pipelineTask` is `skipped` because of `when expression` or one of the parent tasks was `skipped`. It is part of `pipelineRun.Status.SkippedTasks`. | +| State | Description | +|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `Succeeded` | The `pipelineTask` was successful i.e. a respective pod was created and completed successfully. The `pipelineTask` had a `taskRun` with `Succeeded` `ConditionType` and `True` `ConditionStatus`. | +| `Failed` | The `pipelineTask` failed i.e. a respective pod was created but exited with error. The `pipelineTask` has a `taskRun` with `Succeeded` `ConditionType`, `False` `ConditionStatus` and has exhausted all the retries. | +| `None` | No execution state available: either (1) the `pipeline` stopped executing `dag` tasks before it could get to this task, i.e. this task was not started/executed, or (2) the `pipelineTask` is `skipped` because of a `when expression` or one of the parent tasks was `skipped`. It is part of `pipelineRun.Status.SkippedTasks`.
| ### User Stories diff --git a/teps/0029-step-workspaces.md b/teps/0029-step-workspaces.md index 29e426525..767e65249 100644 --- a/teps/0029-step-workspaces.md +++ b/teps/0029-step-workspaces.md @@ -15,9 +15,9 @@ status: implemented - [Goals](#goals) - [Requirements](#requirements) - [Proposal](#proposal) - - [Add workspaces to Steps](#add--to-) - - [Add workspaces to Sidecars](#add--to--1) - - [Allow workspaces in Steps and Sidecars to have their own mountPath](#allow--in--and--to-have-their-own-) + - [Add workspaces to Steps](#add-workspaces-to-steps) + - [Add workspaces to Sidecars](#add-workspaces-to-sidecars) + - [Allow workspaces in Steps and Sidecars to have their own mountPath](#allow-workspaces-in-steps-and-sidecars-to-have-their-own-mountpath) - [User Stories](#user-stories) - [Story 1](#story-1) - [Story 2](#story-2) diff --git a/teps/0033-tekton-feature-gates.md b/teps/0033-tekton-feature-gates.md index 64a58f544..f4f1b1b33 100644 --- a/teps/0033-tekton-feature-gates.md +++ b/teps/0033-tekton-feature-gates.md @@ -233,20 +233,20 @@ This allows administrators to opt into allowing their users to use alpha and bet Since we do not yet have any `v1` CRDs, the behavior will look like: -| Feature Versions -> | beta | alpha | -| --- | --- | --- | -| stable | x | | -| alpha | x | x | +| Feature Versions -> | beta | alpha | +|---------------------|------|-------| +| stable | x | | +| alpha | x | x | x == "**enabled**" Once we have `v1` CRDs it will become: | Feature Versions -> | v1 | beta | alpha | -| --- | --- | --- | --- | -| stable | x | | | -| beta | x | x | | -| alpha | x | x | x | +|---------------------|----|------|-------| +| stable | x | | | +| beta | x | x | | +| alpha | x | x | x | For example: diff --git a/teps/0040-ignore-step-errors.md b/teps/0040-ignore-step-errors.md index 460a99cd6..acd7624c3 100644 --- a/teps/0040-ignore-step-errors.md +++ b/teps/0040-ignore-step-errors.md @@ -24,7 +24,7 @@ authors: - [Advantages](#advantages) - [Single Source 
of Truth](#single-source-of-truth) - [Alternatives](#alternatives) - - [A bool flag](#a--flag) + - [A bool flag](#a-bool-flag) - [exitCode set to 0 through 255](#exitcode-set-to-0-through-255) - [Future Work](#future-work) - [Step exit code as a task result](#step-exit-code-as-a-task-result) diff --git a/teps/0042-taskrun-breakpoint-on-failure.md b/teps/0042-taskrun-breakpoint-on-failure.md index c0ef8261c..d52e18e16 100644 --- a/teps/0042-taskrun-breakpoint-on-failure.md +++ b/teps/0042-taskrun-breakpoint-on-failure.md @@ -113,7 +113,7 @@ To exit a step which has been paused upon failure, the step would wait on a file would unpause and exit the step container. e.g., Step 0 fails and is paused. Writing `0.breakpointexit` in `/tekton/tools` would unpause and exit the step container. -### Debug Environment Additions +### Debug Environment Additions #### Mounts diff --git a/teps/0044-data-locality-and-pod-overhead-in-pipelines.md b/teps/0044-data-locality-and-pod-overhead-in-pipelines.md index be961a77f..8c182d86c 100644 --- a/teps/0044-data-locality-and-pod-overhead-in-pipelines.md +++ b/teps/0044-data-locality-and-pod-overhead-in-pipelines.md @@ -223,7 +223,7 @@ Some functionality required to run multiple Tasks in a pod could be supported wi some functionality would require changes to this code, and some functionality may not be possible at all. * Functionality that could be supported with current pod logic (e.g.
by - [translating a Pipeline directly to a TaskRun](#pipeline-executed-as-taskrun)): + [translating a Pipeline directly to a TaskRun](#pipeline-executed-as-a-taskrun)): * Sequential tasks (specified using [`runAfter`](https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md#using-the-runafter-parameter)) * [Parallel tasks](https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md#configuring-the-task-execution-order) * [String params](https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md#specifying-parameters) @@ -259,7 +259,7 @@ some functionality would require changes to this code, and some functionality ma * [Custom tasks](https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md#using-custom-tasks) - the pod would need to be able to create and watch Custom Tasks, or somehow lean on the Pipelines controller to do this * Functionality that might not be possible (i.e. constrained by pods themselves): - * Any dynamically created TaskRuns. This is discussed in more detail [below](#dynamically-created-tasks-in-pipelines). + * Any dynamically created TaskRuns. This is discussed in more detail [below](#dynamically-created-taskruns-in-pipelines). * Running each Task with a different `ServiceAccount` - the pod has one ServiceAccount as a whole (See also [functionality supported by experimental Pipeline to TaskRun](https://github.com/tektoncd/experimental/tree/main/pipeline-to-taskrun#supported-pipeline-features)) @@ -800,7 +800,7 @@ but lends itself less well to adding authoring time configuration later. ### Controller option to execute Pipelines in a pod In this option, the Tekton controller can be configured to always execute Pipelines inside one pod. 
-This would require similar functionality to the [pipeline in a pod](#pipeline-in-a-pod-plus-pipelines-in-pipelines), +This would require similar functionality to the [pipeline in a pod](#pipeline-functionality-supported-in-pods), but provide less flexibility to Task and Pipeline authors, as only cluster administrators will be able to control scheduling. ### TaskRun controller allows Tasks to contain other Tasks @@ -915,7 +915,7 @@ TaskRuns in the same pod. The controller would be responsible for reconciling bo created from the TaskGroup. The controller would need to determine how many TaskRuns are needed when the TaskGroup is first reconciled, due to -[limitations associated with dynamically creating Tasks](#dynamically-created-tasks-in-pipelines). +[limitations associated with dynamically creating Tasks](#dynamically-created-taskruns-in-pipelines). When the TaskGroup is first reconciled, it would create all TaskRuns needed, with those that are not ready to execute marked as "pending", and a pod with one container per TaskRun. The TaskGroup would store references to any TaskRuns created, and Task statuses would be stored on the TaskRuns. diff --git a/teps/0046-finallytask-execution-post-timeout.md b/teps/0046-finallytask-execution-post-timeout.md index 20146806f..3be0d6694 100644 --- a/teps/0046-finallytask-execution-post-timeout.md +++ b/teps/0046-finallytask-execution-post-timeout.md @@ -107,11 +107,10 @@ spec: The finally task runs after the task completion and both execute normally. 
-| NAME | TASK NAME | STARTED | DURATION | STATUS | -|----------------------------------------------------------------|------------------|----------------|------------|------------------------| -| ∙ hello-world-pipeline-run-with-timeout-task2-kxtc6 | task2 | 19 seconds ago | 7 seconds | Succeeded | -| ∙ hello-world-pipeline-run-with-timeout-task1-bqmzz | task1 | 35 seconds ago | 16 seconds | Succeeded | -| | | | | | +| NAME | TASK NAME | STARTED | DURATION | STATUS | +|-----------------------------------------------------|-----------|----------------|------------|-----------| +| ∙ hello-world-pipeline-run-with-timeout-task2-kxtc6 | task2 | 19 seconds ago | 7 seconds | Succeeded | +| ∙ hello-world-pipeline-run-with-timeout-task1-bqmzz | task1 | 35 seconds ago | 16 seconds | Succeeded | Now if we change the task script in order to have it exceed its timeout (30s), we get the following status report: @@ -120,7 +119,6 @@ Now if we change the task script in order to have it exceed its timeout (30s), w |-----------------------------------------------------|-----------|----------------|------------|------------------------| | ∙ hello-world-pipeline-run-with-timeout-task2-44tsb | task2 | 8 seconds ago | 5 seconds | Succeeded | | ∙ hello-world-pipeline-run-with-timeout-task1-wgcq7 | task1 | 38 seconds ago | 30 seconds | Failed(TaskRunTimeout) | -| | | | | | The finally task still executes after the task failure. @@ -133,8 +131,6 @@ Finally if we reduce the pipelinerun timeout to 10s, our status report shows: | NAME | TASK NAME | STARTED | DURATION | STATUS | |-----------------------------------------------------|-----------|---------------|------------|------------------------| | ∙ hello-world-pipeline-run-with-timeout-task1-q7fw4 | task1 | 2 minutes ago | 30 seconds | Failed(TaskRunTimeout) | -| | | | | | -| | | | | | The pipelinerun timeout takes precedence over the task timeout. After 10s the task fails, and the finally task does not get the chance to execute.
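The timeout interplay shown in the status reports above can be sketched with a minimal Pipeline and PipelineRun. This is an illustrative sketch, not the TEP's own example: the `long-running-task` and `cleanup-task` names are hypothetical, while the `timeout` fields follow the Tekton Pipelines v1beta1 API.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-world-pipeline-with-timeout
spec:
  tasks:
    - name: task1
      timeout: "30s"              # task-level timeout
      taskRef:
        name: long-running-task   # hypothetical Task
  finally:
    - name: task2                 # runs after task1 completes, fails, or times out
      taskRef:
        name: cleanup-task        # hypothetical Task
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: hello-world-pipeline-run-with-timeout
spec:
  timeout: "10s"                  # PipelineRun timeout takes precedence over task1's 30s
  pipelineRef:
    name: hello-world-pipeline-with-timeout
```

With the 10s PipelineRun timeout, the whole run is stopped before the finally task can start, which is the behavior the last status report illustrates.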
diff --git a/teps/0048-task-results-without-results.md b/teps/0048-task-results-without-results.md index 2d076e6e5..72b805fa5 100644 --- a/teps/0048-task-results-without-results.md +++ b/teps/0048-task-results-without-results.md @@ -19,12 +19,12 @@ status: implementable - [Requirements](#requirements) - [Use Cases](#use-cases) - [Consuming task results from the conditional tasks](#consuming-task-results-from-the-conditional-tasks) - - [Pipeline Results from the conditional tasks](#-from-the-conditional-tasks) - - [Task claiming to produce Results fails if it doesn't produces](#task-claiming-to-produce--fails-if-it-doesnt-produces) + - [Pipeline Results from the conditional tasks](#pipeline-results-from-the-conditional-tasks) + - [Task claiming to produce Results fails if it doesn't produces](#task-claiming-to-produce-results-fails-if-it-doesnt-produces) - [Proposal](#proposal) - - [Consuming Task Result from the conditional tasks](#consuming---from-the-conditional-tasks) - - [Pipeline Results from the conditional tasks](#-from-the-conditional-tasks-1) - - [Task claiming to produce Result fails if it doesn't produces](#-claiming-to-produce--fails-if-it-doesnt-produces) + - [Consuming Task Result from the conditional tasks](#consuming-task-results-from-the-conditional-tasks) + - [Pipeline Results from the conditional tasks](#pipeline-results-from-the-conditional-tasks-1) + - [Task claiming to produce Result fails if it doesn't produces](#task-claiming-to-produce-result-fails-if-it-doesnt-produces) - [Test Plan](#test-plan) - [Alternatives](#alternatives) - [Declaring Results as Optional](#declaring-results-as-optional) @@ -708,7 +708,7 @@ produce `Results` as discussed in [tektoncd/pipeline#3497](https://github.com/te It does not address the use cases for providing default `Results` that can be consumed in subsequent `Tasks`. -## Future Work +## Future Work Determine if we need default `Results` declared at runtime in the future, and how we can support that. 
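The "consuming task results from the conditional tasks" use case that TEP-0048 addresses can be sketched as follows. All names here (`add-file`, `create-file`, the `filename` result) are hypothetical, chosen only to illustrate the gap:

```yaml
tasks:
  - name: add-file
    when:
      - input: "$(params.add-file)"
        operator: in
        values: ["yes"]
    taskRef:
      name: create-file   # hypothetical Task that declares a result named `filename`
  - name: use-file
    params:
      - name: filename
        # If add-file is skipped by its when expression, this result is never
        # produced -- which is exactly the situation this TEP addresses.
        value: "$(tasks.add-file.results.filename)"
    taskRef:
      name: read-file     # hypothetical Task
```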
diff --git a/teps/0049-aggregate-status-of-dag-tasks.md b/teps/0049-aggregate-status-of-dag-tasks.md index cbf8cf084..a142e603c 100644 --- a/teps/0049-aggregate-status-of-dag-tasks.md +++ b/teps/0049-aggregate-status-of-dag-tasks.md @@ -104,12 +104,12 @@ finally: values: ["Failed"] ``` -| State | Description | -| ----- | ----------- | -| `Succeeded` | All `dag` tasks have succeeded. | -| `Failed` | Any one of the `dag` task failed. | -| `Completed` | All `dag` tasks completed successfully including one or more skipped tasks. | -| `None` | No aggregate execution status available (i.e. none of the above) because some of the tasks are still pending or running or cancelled or timed out. | +| State | Description | +|-------------|----------------------------------------------------------------------------------------------------------------------------------------------------| +| `Succeeded` | All `dag` tasks have succeeded. | +| `Failed` | Any one of the `dag` tasks failed. | +| `Completed` | All `dag` tasks completed successfully, including one or more skipped tasks. | +| `None` | No aggregate execution status available (i.e. none of the above) because some of the tasks are still pending or running, or cancelled or timed out. | `$(tasks.status)` is not accessible in any `dag` task but only in a `finally` task. The `pipeline` creation will fail with a validation error if `$(tasks.status)` is used in any `dag` task.
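Besides guarding a `finally` task with a `when` expression on `$(tasks.status)` (as in the TEP's own example above), the aggregate status can also be passed into a `finally` task as a parameter. A minimal sketch, assuming a hypothetical `notification` Task:

```yaml
finally:
  - name: notify
    params:
      - name: aggregate-status
        value: "$(tasks.status)"   # resolves to Succeeded, Failed, Completed, or None
    taskRef:
      name: notification           # hypothetical Task that reports the outcome
```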
diff --git a/teps/0053-nested-triggers.md b/teps/0053-nested-triggers.md index 49ea93ccf..a676a1bf9 100644 --- a/teps/0053-nested-triggers.md +++ b/teps/0053-nested-triggers.md @@ -30,7 +30,7 @@ authors: - [Advantages](#advantages-2) - [Disadvantages](#disadvantages-2) - [Implementation Decision](#implementation-decision) - - [Example triggerGroup Configuration](#example--configuration) + - [Example triggerGroup Configuration](#example-triggergroup-configuration) - [Implementation PRs](#implementation-prs) - [References](#references) diff --git a/teps/0056-pipelines-in-pipelines.md b/teps/0056-pipelines-in-pipelines.md index e1b3189e0..9accf7f33 100644 --- a/teps/0056-pipelines-in-pipelines.md +++ b/teps/0056-pipelines-in-pipelines.md @@ -46,10 +46,10 @@ see-also: - [Future Work](#future-work) - [Runtime Specification](#runtime-specification) - [Alternatives](#alternatives) - - [Specification - Group PipelineRef and PipelineSpec](#specification---group--and-) - - [Specification - Use PipelineRunSpec in PipelineTask](#specification---use--in-) - - [Specification - Reorganize PipelineTask](#specification---reorganize-) - - [Runtime specification - provide overrides for PipelineRun](#runtime-specification---provide-overrides-for-) + - [Specification - Group PipelineRef and PipelineSpec](#specification---group-pipelineref-and-pipelinespec) + - [Specification - Use PipelineRunSpec in PipelineTask](#specification---use-pipelinerunspec-in-pipelinetask) + - [Specification - Reorganize PipelineTask](#specification---reorganize-pipelinetask) + - [Runtime specification - provide overrides for PipelineRun](#runtime-specification---provide-overrides-for-pipelinerun) - [Runtime specification - provide overrides for all runtime types](#runtime-specification---provide-overrides-for-all-runtime-types) - [Status - Require Minimal Status](#status---require-minimal-status) - [Status - Populate Embedded and Minimal Status](#status---populate-embedded-and-minimal-status) @@ -543,7
+543,7 @@ spec: ### Results -#### Consuming Results +#### Consuming Results `Pipelines` in `Pipelines` will consume `Results`, in the same way as `Tasks` in `Pipelines`. @@ -632,7 +632,7 @@ spec: name: notification ``` -### Workspaces +### Workspaces `PipelineTasks` with `Pipelines` can reference `Workspaces`, in the same way as `PipelineTasks` with `Tasks`. In this case, the `Workspaces` from the parent `PipelineRun` will be bound to the child `PipelineRun`. @@ -716,7 +716,7 @@ map to `timeouts.pipeline` in the child `PipelineRun`. If users need finer-grained timeouts for child `PipelineRuns`, such as those supported in parent `PipelineRuns`, we can explore supporting them in future work - see [possible solution](#runtime-specification---provide-overrides-for-pipelinerun). -### Matrix +### Matrix Users can fan out `PipelineTasks` with `Tasks` and `Custom Tasks` into multiple `TaskRuns` and `Runs` using `Matrix`. In the same way, users can fan out `PipelineTasks` with `Pipelines` into multiple child `PipelineRuns`. This provides
diff --git a/teps/0060-remote-resource-resolution.md b/teps/0060-remote-resource-resolution.md index 7ffcf3e93..c47d1a487 100644 --- a/teps/0060-remote-resource-resolution.md +++ b/teps/0060-remote-resource-resolution.md @@ -19,19 +19,19 @@ authors: - [Use Cases (optional)](#use-cases-optional) - [Requirements](#requirements) - [Proposal](#proposal) - - [1. Establish a new syntax in Tekton Pipelines' taskRef and pipelineRef structs for remote resources](#1-establish-a-new-syntax-in-tekton-pipelines--and--structs-for-remote-resources) - - [2. Implement procedure for resolution: ResolutionRequest CRD](#2-implement-procedure-for-resolution--crd) + - [1. Establish a new syntax in Tekton Pipelines' taskRef and pipelineRef structs for remote resources](#1-establish-a-new-syntax-in-tekton-pipelines-taskref-and-pipelineref-structs-for-remote-resources) + - [2. Implement procedure for resolution: ResolutionRequest CRD](#2-implement-procedure-for-resolution-resolutionrequest-crd) - [3. Create a new Tekton Resolution project](#3-create-a-new-tekton-resolution-project) - [Risks and Mitigations](#risks-and-mitigations) - [Relying on a CRD as storage for in-lined resolved data](#relying-on-a-crd-as-storage-for-in-lined-resolved-data) - [Changing the way it works means potentially rewriting multiple resolvers](#changing-the-way-it-works-means-potentially-rewriting-multiple-resolvers) - [Data Integrity](#data-integrity) - [User Experience (optional)](#user-experience-optional) - - [Simple flow: user submits TaskRun using public catalog Task](#simple-flow-user-submits--using-public-catalog-) + - [Simple flow: user submits TaskRun using public catalog Task](#simple-flow-user-submits-taskrun-using-public-catalog-task) - [Performance](#performance) - [Design Details](#design-details) - [New Pipelines syntax schema](#new-pipelines-syntax-schema) - - [ResolutionRequest objects](#-objects) + - [ResolutionRequest objects](#resolutionrequest-objects) - [YAML Examples](#yaml-examples) - 
[Resolver specifics](#resolver-specifics) - [In-Cluster Resolver](#in-cluster-resolver) @@ -63,7 +63,7 @@ authors: - [Applicability For Other Tekton Projects](#applicability-for-other-tekton-projects-4) - [A CRD That Wraps Tekton's Existing Types](#a-crd-that-wraps-tektons-existing-types) - [Applicability For Other Tekton Projects](#applicability-for-other-tekton-projects-5) - - [Use a pending Status on PipelineRuns/TaskRuns With Embedded Resolution Info](#use-a--status-on-pipelinerunstaskruns-with-embedded-resolution-info) + - [Use a pending Status on PipelineRuns/TaskRuns With Embedded Resolution Info](#use-a-pending-status-on-pipelinerunstaskruns-with-embedded-resolution-info) - [Applicability For Other Tekton Projects](#applicability-for-other-tekton-projects-6) - [Use an Admission Controller to Perform Resolution](#use-an-admission-controller-to-perform-resolution) - [Applicability For Other Tekton Projects](#applicability-for-other-tekton-projects-7) diff --git a/teps/0072-results-json-serialized-records.md b/teps/0072-results-json-serialized-records.md index 63c21395e..04ddd4ee3 100644 --- a/teps/0072-results-json-serialized-records.md +++ b/teps/0072-results-json-serialized-records.md @@ -321,14 +321,14 @@ features in CEL will no longer work out of the box - now an opaque byte string) - users will need to reference the `type` field: | Before | After | - | ---------------------------------------------------- | ----------------------------------------------------- | + |------------------------------------------------------|-------------------------------------------------------| | type(record.data) == tekton.pipeline.v1beta1.TaskRun | record.data.type == "tekton.pipeline.v1beta1.TaskRun" | - Underlying data cannot be directly referenced - users will need to use the `value` field: | Before | After | - | -------------------------------- | -------------------------------------- | + |----------------------------------|----------------------------------------| | 
record.data.name.contains("foo") | record.data.value.name.contains("foo") | It is possible that we might be able to replicate some of the well-known Any diff --git a/teps/0074-deprecate-pipelineresources.md b/teps/0074-deprecate-pipelineresources.md index ac9385e29..f68bad242 100644 --- a/teps/0074-deprecate-pipelineresources.md +++ b/teps/0074-deprecate-pipelineresources.md @@ -19,7 +19,7 @@ authors: - [Requirements](#requirements) - [Proposal](#proposal) - [Features that will replace PipelineResources functionality](#features-that-will-replace-pipelineresources-functionality) - - [New repo: tektoncd/images](#new-repo-tektoncdimages) + - [Images used in PipelineResources](#images-used-in-pipelineresources) - [Risks and Mitigations](#risks-and-mitigations) - [User Experience](#user-experience) - [Design Details](#design-details) @@ -46,14 +46,14 @@ This TEP proposes deprecating the CRD [PipelineResource](https://github.com/tekt in its current form, and addressing each problem PipelineResources were solving with specific features (see [Design details](#design-details) for more depth on how the listed replacements address the feature): -| PipelineResources Feature| Replacement | -|---|---| -| Augmenting Tasks with steps that execute on the same pod | [TEP-0044 Decoupling Task composition from scheduling](https://github.com/tektoncd/community/blob/main/teps/0044-decouple-task-composition-from-scheduling.md) | -| Automatic storage provisioning | [volumeClaimTemplates](https://github.com/tektoncd/pipeline/blob/main/docs/workspaces.md#volumeclaimtemplate), [TEP-0044 Decoupling Task composition from scheduling](https://github.com/tektoncd/community/blob/main/teps/0044-decouple-task-composition-from-scheduling.md) + catalog tasks | -| PipelineResource specific credential handling | [TEP-0044 Decoupling Task composition from scheduling](https://github.com/tektoncd/community/blob/main/teps/0044-decouple-task-composition-from-scheduling.md) | -| Expressing typed inputs and 
outputs | [TEP-0075 Dictionary/object params and results](https://github.com/tektoncd/community/pull/479) | -| Reusable parameter bundles | [TEP-0075 Dictionary/object params and results](https://github.com/tektoncd/community/pull/479) | -| Contract around files provided or expected on disk | [TEP-0030 Workspace paths](https://github.com/tektoncd/community/blob/main/teps/0030-workspace-paths.md) | +| PipelineResources Feature | Replacement | +|----------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Augmenting Tasks with steps that execute on the same pod | [TEP-0044 Decoupling Task composition from scheduling](https://github.com/tektoncd/community/blob/main/teps/0044-decouple-task-composition-from-scheduling.md) | +| Automatic storage provisioning | [volumeClaimTemplates](https://github.com/tektoncd/pipeline/blob/main/docs/workspaces.md#volumeclaimtemplate), [TEP-0044 Decoupling Task composition from scheduling](https://github.com/tektoncd/community/blob/main/teps/0044-decouple-task-composition-from-scheduling.md) + catalog tasks | +| PipelineResource specific credential handling | [TEP-0044 Decoupling Task composition from scheduling](https://github.com/tektoncd/community/blob/main/teps/0044-decouple-task-composition-from-scheduling.md) | +| Expressing typed inputs and outputs | [TEP-0075 Dictionary/object params and results](https://github.com/tektoncd/community/pull/479) | +| Reusable parameter bundles | [TEP-0075 Dictionary/object params and results](https://github.com/tektoncd/community/pull/479) | +| Contract around files provided or expected on disk | [TEP-0030 Workspace paths](https://github.com/tektoncd/community/blob/main/teps/0030-workspace-paths.md) | This still 
leaves the door open for adding a similar abstraction, but in the meantime, we can remove this contentious concept from our API and move forward toward our [v1 Pipelines release](https://github.com/tektoncd/pipeline/issues/3548). @@ -299,7 +299,7 @@ without requiring PipelineResources: proxy information, etc., which is why [the git-clone task has 15 parameters](https://github.com/tektoncd/catalog/blob/main/task/git-clone/0.4/README.md#parameters), but having to duplicate these params each time you use them has led to the - [git-rebase lacking many of these params]([the git-rebase task has 10 parameters](https://github.com/tektoncd/catalog/blob/main/task/git-rebase/0.1/README.md#parameters)) + [git-rebase lacking many of these params](https://github.com/tektoncd/catalog/blob/main/task/git-rebase/0.1/README.md#parameters) * *Replacement*: Supporting objects ([TEP-0075](https://github.com/tektoncd/community/pull/479)) and eventually more complex params and results will allow us to use those to define known interfaces, e.g. a dictionary of values you need when connecting to git. 
This will still be duplicated each time, but we might still be able to use this feature diff --git a/teps/0075-object-param-and-result-types.md b/teps/0075-object-param-and-result-types.md index f44fb8df2..7775f52b5 100644 --- a/teps/0075-object-param-and-result-types.md +++ b/teps/0075-object-param-and-result-types.md @@ -43,7 +43,7 @@ authors: - [Design Evaluation](#design-evaluation) - [Drawbacks](#drawbacks) - [Alternatives](#alternatives) -- [Alternative #1: Introduce a schema section specifically for the type schema](#alternative-1--introduce-a--section-specifically-for-the-type-schema) +- [Alternative #1: Introduce a schema section specifically for the type schema](#alternative-1-introduce-a-schema-section-specifically-for-the-type-schema) - [Alternative #2: Create a wrapper for JSON Schema](#alternative-2-create-a-wrapper-for-json-schema) - [Alternative #3: Create our own syntax just for dictionaries](#alternative-3-create-our-own-syntax-just-for-dictionaries) - [More alternatives](#more-alternatives) @@ -289,7 +289,7 @@ objects will likely use string values This proposal suggests adding a new `properties` section to param and result definition. If we later support more json schema attribute such as `additionalProperties` and `required`, we'd also support them at the same level as the `properties` field here. ( -See [Alternative #1 adding a schema section](#alternative-1--introduce-a-schema-section-specifically-for-the-type-schema) +See [Alternative #1 adding a schema section](#alternative-1-introduce-a-schema-section-specifically-for-the-type-schema) .) (At that point we should also consider whether we want to adopt strict JSON schema syntax or if we want to support Open API schema instead; see [why JSON Schema](#why-json-schema).) 
diff --git a/teps/0076-array-result-types.md b/teps/0076-array-result-types.md index da0a2d9b1..681d3d682 100644 --- a/teps/0076-array-result-types.md +++ b/teps/0076-array-result-types.md @@ -172,7 +172,7 @@ This TEP proposes expanding support for arrays in Tasks and Pipelines by adding: 1. **Use case for array indexing: Tasks that wrap CLIs** Array indexing would make it possible for Tasks that wrap generic CLIs to provide - (short - see [size limits](#size-limit)) stdout output as array results which could be indexed by consuming tasks. + (short - see [size limits](#size-limits)) stdout output as array results which could be indexed by consuming tasks. For example [the gcloud CLI task](https://github.com/tektoncd/catalog/tree/main/task/gcloud/0.1): the CLI could be doing all kinds of different things depending on the arguments (e.g. `gcloud projects list` vs `gcloud container clusters list`). In order to consume the output in downstream tasks (with minimal effort), you diff --git a/teps/0079-tekton-catalog-support-tiers.md b/teps/0079-tekton-catalog-support-tiers.md index dff52c923..cef2bdeb1 100644 --- a/teps/0079-tekton-catalog-support-tiers.md +++ b/teps/0079-tekton-catalog-support-tiers.md @@ -533,7 +533,7 @@ of the Catalog, such as by using the `catalog-support-tier` field in Hub configu There's no clear benefit for allowing Contributors to host Catalogs in the tektoncd-catalog GitHub Org. However, it adds more maintenance burden because the Tekton Maintainers would have to create the repositories, and provide general oversight. Contributors are already hosting Catalogs in their own organizations and repositories e.g. -[eBay][ebay] and [Buildpacks][buildpacks]. +[eBay][eBay] and [Buildpacks][buildpacks]. 
### Automated Testing and Dogfooding @@ -1071,7 +1071,7 @@ We only automate the signature verification process in the CI in the current des * [Tekton Catalog and Hub Design][catalog-hub-design] * [Pipeline Catalog Integration Proposal][catalog-proposal] * [Original Tekton Catalog Tiers Proposal][catalog-support-tiers] -* [Tekton Catalog Test Infrastructure Design Doc](doc-infra) +* [Tekton Catalog Test Infrastructure Design Doc][doc-infra] * [TEP for Catalog Test Requirements and Infra for Verified+][tep-infra] * [TEP-0003: Tekton Catalog Organization][tep-0003] * [TEP-0091: Verified Remote Resources][tep-0091] diff --git a/teps/0081-add-chains-subcommand-to-the-cli.md b/teps/0081-add-chains-subcommand-to-the-cli.md index 7b7b41c52..ebcfa46f4 100644 --- a/teps/0081-add-chains-subcommand-to-the-cli.md +++ b/teps/0081-add-chains-subcommand-to-the-cli.md @@ -143,7 +143,7 @@ instead of: ## Requirements There may not be any specific requirements, depending on the commands being -implemented. See the [Notes/Caveats (optional)](notes-caveats-optional) for more +implemented. See the [Notes/Caveats (optional)](#notescaveats-optional) for more details. ## Proposal @@ -216,7 +216,7 @@ Why should this TEP _not_ be implemented? The alternative is to use a combination of shell tools and to know which exact annotation or key to query/update. It was ruled out since it complicates the operations for no good reason. See the -[Use Cases (optional)](use-cases-optional) section for some examples. +[Use Cases (optional)](#other-use-case-ideas) section for some examples. 
One other alternative would be that chains provides a `tkn-chains` binary, and with the "execution model" we have in `tkn`, it would appear as a subcommand.One diff --git a/teps/0082-workspace-hinting.md b/teps/0082-workspace-hinting.md index 2de35e950..05d68ab89 100644 --- a/teps/0082-workspace-hinting.md +++ b/teps/0082-workspace-hinting.md @@ -36,7 +36,7 @@ authors: - [Cons](#cons-2) - [Loosely-Coupled Metadata](#loosely-coupled-metadata) - [Pros](#pros-2) - - [Syntactic Alternatives to workspaces](#syntactic-alternatives-to-) + - [Syntactic Alternatives to workspaces](#syntactic-alternatives-to-workspaces) - [Pros](#pros-3) - [Cons](#cons-3) - [Infrastructure Needed (optional)](#infrastructure-needed-optional) diff --git a/teps/0084-endtoend-provenance-collection.md b/teps/0084-endtoend-provenance-collection.md index df299e8f6..d08383166 100644 --- a/teps/0084-endtoend-provenance-collection.md +++ b/teps/0084-endtoend-provenance-collection.md @@ -560,11 +560,11 @@ information. New configuration options will be introduced to control the behavior for producing and storing pipelinerun attestations. These should behave as similar as possible to their taskrun counterparts. -| Key | Description | Supported Values | Default | -| --- | ----------- | ---------------- | ------- | -| `artifacts.pipelinerun.format` | The format to store TaskRun payloads in. | tekton, in-toto | tekton | -| `artifacts.pipelinerun.storage` | The storage backend to store PipelineRun signatures and attestations in. Multiple backends can be specified with comma-separated list (“tekton,oci”). To disable the PipelineRun artifact input an empty string (""). | tekton, oci, gcs, docdb, grafeas | tekton | -| `artifacts.pipelinerun.signer` | The signature backend to sign Taskrun payloads with. 
| x509, kms | x509 |
+| Key                             | Description                                                                                                                                                                                                             | Supported Values                 | Default |
+|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------|---------|
+| `artifacts.pipelinerun.format`  | The format to store PipelineRun payloads in.                                                                                                                                                                            | tekton, in-toto                  | tekton  |
+| `artifacts.pipelinerun.storage` | The storage backend to store PipelineRun signatures and attestations in. Multiple backends can be specified with a comma-separated list (“tekton,oci”). To disable the PipelineRun artifact, input an empty string (""). | tekton, oci, gcs, docdb, grafeas | tekton  |
+| `artifacts.pipelinerun.signer`  | The signature backend to sign PipelineRun payloads with.                                                                                                                                                                | x509, kms                        | x509    |
 
 It is expected that storage has an ever changing set of supported values, just like it
 is for taksruns. There should be parity between the storages supported by both taskrun and pipelinerun
diff --git a/teps/0086-changing-the-way-result-parameters-are-stored.md b/teps/0086-changing-the-way-result-parameters-are-stored.md
index 71590a58a..6918c13a2 100644
--- a/teps/0086-changing-the-way-result-parameters-are-stored.md
+++ b/teps/0086-changing-the-way-result-parameters-are-stored.md
@@ -235,7 +235,7 @@ Implementations should take care to ensure the integrity of result/param content
 - ideally, with tight access controls to prevent tampering and leakage
 for example, an implementation that stored contents in GCS could use signed URLs to
 only authorize one POST of object contents, only authorize GETs for only one hour, and delete
 the object contents entirely after one day.
-However, we are currently leaning toward using implementations of [the HTTP service](#dedicated-http-service) as the place +However, we are currently leaning toward using implementations of [the HTTP service](#dedicated-storage-api-service) as the place to plug in support for different backends instead. This alternative provides options for handling a change to result parameter storage and potentially involves adjusting the @@ -279,7 +279,7 @@ We need to consider both performance and security impacts of the changes and tra #### Design Details -This approach for recording results coupled with the [*Dedicated HTTP Service*](#dedicated-http-service) +This approach for recording results coupled with the [*Dedicated HTTP Service*](#dedicated-storage-api-service) (with a well defined interface that can be swapped out) can help abstract the backend. With this approach the default backend could be a ConfigMap (or CRD) - or we could even continue to use the TaskRun itself to store results - since only the HTTP service would need the permissions required to make teh edits (vs a solution where we need to on the fly @@ -395,7 +395,7 @@ Questions: - Should the sidecar be responsible for deciding whether the result should be reported by-value or by-reference? Or is that a controller-wide configuration? - Is passing by-value still useful for small pieces of data to be able to have them inlined in TaskRun/PipelineRun statuses? -### N Configmaps Per TaskRun with Patch Merges (c). +### N Configmaps Per TaskRun with Patch Merges (c). - As the TaskRun Pod proceeds, the injected entrypoint would write result data from `/tekton/results/foo` to the ConfigMap. After a TaskRun completes, the TaskRun controller would read the associated ConfigMap data and copy it into the TaskRun’s status. The ConfigMap is then deleted. 
- **Create N ConfigMaps** for each of N results, and grant the workload access to write to these results using one of these focused Roles: diff --git a/teps/0088-result-summaries.md b/teps/0088-result-summaries.md index 4fb0bc366..482d53816 100644 --- a/teps/0088-result-summaries.md +++ b/teps/0088-result-summaries.md @@ -181,7 +181,7 @@ demonstrate the interest in a TEP within the wider Tekton community. We want to let UIs/CLIs to show high level summaries of Results, e.g. - | Result | Type | Status | Duration | -| ------ | ----------- | ------- | -------- | +|--------|-------------|---------|----------| | A | PipelineRun | SUCCESS | 30s | | B | TaskRun | FAILURE | 10s | | C | CustomRun | SUCCESS | 5s | diff --git a/teps/0089-nonfalsifiable-provenance-support.md b/teps/0089-nonfalsifiable-provenance-support.md index fa54a5ea4..fd9114cc6 100644 --- a/teps/0089-nonfalsifiable-provenance-support.md +++ b/teps/0089-nonfalsifiable-provenance-support.md @@ -526,11 +526,11 @@ We add the condition type `SignedResultVerified` as a way for the tekton-pipelin For the condition: `SignedResultVerified`, it has the following the behavior: -`status`|`reason`|`completionTime` is set|Description -:-------|:-------|:---------------------:|--------------: -True|TaskRunResultsVerified|Yes|The `TaskRun` results have been verified through validation of its signatures -False|TaskRunResultsVerificationFailed|Yes|The `TaskRun` results' signatures failed to verify -Unknown|AwaitingTaskRunResults|No|Waiting upon `TaskRun` results and signatures to verify +| `status` | `reason` | `completionTime` is set | Description | +|:---------|:---------------------------------|:-----------------------:|------------------------------------------------------------------------------:| +| True | TaskRunResultsVerified | Yes | The `TaskRun` results have been verified through validation of its signatures | +| False | TaskRunResultsVerificationFailed | Yes | The `TaskRun` results' signatures failed to verify 
| +| Unknown | AwaitingTaskRunResults | No | Waiting upon `TaskRun` results and signatures to verify | ### Storing verification data @@ -677,7 +677,7 @@ Potential mitigations: ## Alternatives -### Kubernertes Service Account Token Volume Projection +### Kubernertes Service Account Token Volume Projection Instead of SPIRE, we could potentially use [Kubernertes Service Account Token Volume Projection](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection) for signing. This is a form of keyless signing, which is described in detail in [Zero-friction “keyless signing” with Kubernetes](https://chainguard.dev/posts/2021-11-03-zero-friction-keyless-signing) by mattmoor@. diff --git a/teps/0090-matrix.md b/teps/0090-matrix.md index da6a7e58c..ab25af777 100644 --- a/teps/0090-matrix.md +++ b/teps/0090-matrix.md @@ -271,7 +271,7 @@ my repository and produces a `Result` that is used to dynamically execute `TaskR Read more in [user experience report #1][kaniko-example-1] and [user experience report #2][kaniko-example-2]. -#### 2. Monorepo Build +#### 2. Monorepo Build As a `Pipeline` author, I have several components (dockerfiles/packages/services) in my repository. @@ -402,7 +402,7 @@ repository and produces a `Result` that is used to dynamically execute the `Task test-code-analysis test-unit-tests e2e-tests ``` -#### 5. Test Sharding +#### 5. Test Sharding As a `Pipeline` author, I have a large test suite that's slow (e.g. browser based tests) and I need to speed it up. I need to split up the test suite into groups, run the tests separately, then combine the results. @@ -526,7 +526,7 @@ GitHub Actions workflows syntax also allows users to: Read more in the [documentation][github-actions]. -#### Jenkins +#### Jenkins Jenkins allows users to define a configuration `matrix` to specify what steps to duplicate. 
It also allows users to exclude certain combinations in the `matrix` @@ -586,7 +586,7 @@ pipeline { Read more in the [documentation][jenkins-docs] and related [blog][jenkins-blog]. -#### Argo Workflows +#### Argo Workflows Argo Workflows allows users to iterate over: - a list of items as static inputs @@ -718,7 +718,7 @@ If needed, we can also explore providing more granular controls for maximum numb or `Runs` from `Matrices` - either at `PipelineRun`, `Pipeline` or `PipelineTask` levels - later. This is an option we can pursue after gathering user feedback - it's out of scope for this TEP. -## Design +## Design In this section, we go into the details of the `Matrix` in relation to: @@ -1060,7 +1060,7 @@ Producing `Results` from fanned out `PipelineTasks` will not be in the initial i After [TEP-0075: Object Parameters and Results][tep-0075] and [TEP-0076: Array Results][tep-0076] have landed, we will design how to support `Results` from fanned out `PipelineTasks`. -### Execution Status +### Execution Status Today, `PipelineTasks` in the `finally` section can access the execution `Status` - `Succeeded`, `Failed` or `None` - of each `PipelineTask` in the `tasks` section. This @@ -1615,7 +1615,7 @@ However, this approach has the following disadvantages: [tep-0075]: ./0075-object-param-and-result-types.md [tep-0076]: ./0076-array-result-types.md [tep-0079]: ./0079-tekton-catalog-support-tiers.md -[tep-0096]: ./0096-pipelines-v1-api.md +[tep-0096]: ./0096-pipelines-v1.md [tep-0100]: ./0100-embedded-taskruns-and-runs-status-in-pipelineruns.md [task-loops]: https://github.com/tektoncd/experimental/tree/main/task-loops [issue-2050]: https://github.com/tektoncd/pipeline/issues/2050 diff --git a/teps/0091-trusted-resources.md b/teps/0091-trusted-resources.md index 6bb910a6e..40337c267 100644 --- a/teps/0091-trusted-resources.md +++ b/teps/0091-trusted-resources.md @@ -706,11 +706,11 @@ So a `keyref` is needed when signing the pipeline. 
The signing cli doesn't need
 
 ### Options for integrating with remote resolution
 
-| Method | Pros | Cons |
-| -------- | ----------- | ----------- |
-| Fetch remote resources in validating Webhook | Verification fail fast in webhook | Duplicate work for Resolution, may introduce latency, and one extreme case is that the resource verified may not the the same in reconciler's resolved resource
-| Verify in Controller after resolving resources| No duplicate work for Resolution | The verification cannot fail fast in webhook. The resources may have been stored in etcd and used by other components
-| Verify in Remote Resolution | No duplicate work for Resolution | Verification coupled with Resolution
+| Method                                         | Pros                               | Cons                                                                                                                                                               |
+|------------------------------------------------|------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Fetch remote resources in validating Webhook   | Verification fails fast in webhook | Duplicate work for Resolution, may introduce latency, and one extreme case is that the resource verified may not be the same as the reconciler's resolved resource |
+| Verify in Controller after resolving resources | No duplicate work for Resolution   | The verification cannot fail fast in webhook. The resources may have been stored in etcd and used by other components                                              |
+| Verify in Remote Resolution                    | No duplicate work for Resolution   | Verification coupled with Resolution                                                                                                                               |
 
 In this TEP we propose to proceed with option 2 to do verification in reconciler after remote resolution is done
 considering the latency of doing remote resolution at admission webhook.
@@ -735,11 +735,11 @@ Use Kyverno for verifying YAML files: This can be used to verify local resources
 
 There are several options:
 
-| Method | Pros | Cons |
-| -------- | ----------- | ----------- |
-| Update the annotation | Easy to implement | Easy to be mutated by other components
-| Add a new field `TrustedResourcesVerified` into `status` | A dedicated field to reflect the verification | Need api change
-| Add a new condition into the status condition list | Easy to implement, hard to be mutated |Need a custom condition type and make sure it is not overwritten
+| Method                                                   | Pros                                          | Cons                                                             |
+|----------------------------------------------------------|-----------------------------------------------|------------------------------------------------------------------|
+| Update the annotation                                    | Easy to implement                             | Easy to be mutated by other components                           |
+| Add a new field `TrustedResourcesVerified` into `status` | A dedicated field to reflect the verification | Need API change                                                  |
+| Add a new condition into the status condition list       | Easy to implement, hard to be mutated         | Need a custom condition type and make sure it is not overwritten |
 
 In this TEP we propose to add a new condition into the status condition list.
diff --git a/teps/0093-add-sign-verify-subcommand-to-the-cli.md b/teps/0093-add-sign-verify-subcommand-to-the-cli.md index 3e847e428..a8dc82886 100644 --- a/teps/0093-add-sign-verify-subcommand-to-the-cli.md +++ b/teps/0093-add-sign-verify-subcommand-to-the-cli.md @@ -120,11 +120,11 @@ tkn task sign examples/example-task.yaml -K=cosign.key -f=signed.yaml ``` Flags: -| Flag | ShortFlag | Description | -|---|---|---| -| keyFile | K | private key file path | -| kmsKey | m | kms key path | -| targetFile | f | Filename of the signed file | +| Flag | ShortFlag | Description | +|------------|-----------|-----------------------------| +| keyFile | K | private key file path | +| kmsKey | m | kms key path | +| targetFile | f | Filename of the signed file | This will read private and unmarshal yaml files to get Tekton CRD (Task/Pipeline), signing function should sign the hashed bytes of the CRD, attach the base64 encoded signature to annotation with key as `tekton.dev/signature`. @@ -136,10 +136,10 @@ To verify a file: tkn task verify signed.yaml -K=cosign.pub -d=Task ``` -| Flag | ShortFlag | Description | -|---|---|---| -| keyFile | K | public key file path | -| kmsKey | m | kms key path | +| Flag | ShortFlag | Description | +|---------|-----------|----------------------| +| keyFile | K | public key file path | +| kmsKey | m | kms key path | Logs in terminal should tell users whether verification succeed or not. 
diff --git a/teps/0094-configuring-resources-at-runtime.md b/teps/0094-configuring-resources-at-runtime.md index 866b33886..de77f5e69 100644 --- a/teps/0094-configuring-resources-at-runtime.md +++ b/teps/0094-configuring-resources-at-runtime.md @@ -33,7 +33,7 @@ authors: - [Conformance](#conformance) - [Drawbacks](#drawbacks) - [Alternatives](#alternatives) - - [Allow TaskRuns to patch arbitrary fields of Tasks](#allow-s-to-patch-arbitrary-fields-of-s) + - [Allow TaskRuns to patch arbitrary fields of Tasks](#allow-taskruns-to-patch-arbitrary-fields-of-tasks) - [Syntax Option 1: Overriding via TaskSpec](#syntax-option-1-overriding-via-taskspec) - [Syntax Option 2: JSONPath](#syntax-option-2-jsonpath) - [Allow resource requirements to be parameterized](#allow-resource-requirements-to-be-parameterized) diff --git a/teps/0096-pipelines-v1.md b/teps/0096-pipelines-v1.md index c8ea9f34b..96f92b96a 100644 --- a/teps/0096-pipelines-v1.md +++ b/teps/0096-pipelines-v1.md @@ -93,7 +93,7 @@ replaced by a version that is at least as stable (for example, v1beta1 can be co A deprecated API version must continue to be supported for a length of time based on the following chart. | API Version | Kubernetes Deprecation Policy | Tekton Deprecation Policy | -|:----------- |:----------------------------- |:------------------------- | +|:------------|:------------------------------|:--------------------------| | Alpha | 0 releases | 0 months | | Beta | 9 months or 3 releases | 9 months | @@ -135,7 +135,7 @@ must be supported for 12 months from when a new stable API version is created. B must be accompanied by deprecation warnings and migration instructions from the previous version. 
| API Version | Kubernetes Deprecation Policy | Tekton Deprecation Policy | -|:----------- |:----------------------------- |:------------------------- | +|:------------|:------------------------------|:--------------------------| | Alpha | 0 releases | 0 months | | Beta | 9 months or 3 releases | 9 months | | V1 | 12 months or 3 releases | 12 months | @@ -250,7 +250,7 @@ This policy should be updated to include Tekton metrics as part of the API. No o See [Where should we take ClusterTasks next?](https://github.com/tektoncd/pipeline/issues/4476) for more info. | CRD | Current level | Proposed level | -| ----------- | ------------- | -------------- | +|-------------|---------------|----------------| | Task | beta | stable | | TaskRun | beta | stable | | Pipeline | beta | stable | @@ -316,15 +316,15 @@ Existing feature flags can be found [here](https://github.com/tektoncd/pipeline/ The feature flag "scope-when-expressions-to-task" is not present in this table, as its deprecation has already [been announced](https://github.com/tektoncd/pipeline/blob/main/docs/deprecations.md#deprecation-table). 
-| Flag | Current Default | Proposed State | -| --------------------------------------------- | --------------- | -----------------------------------------------------| -| disable-affinity-assistant | false | default to true | -| disable-creds-init | false | no change | -| running-in-environment-with-injected-sidecars | true | no change | -| require-git-ssh-secret-known-hosts | false | no change | -| enable-tekton-oci-bundles | false | mark as deprecated and remove in v1 | -| enable-custom-tasks | false | collapsed under enable-api-fields, requires "beta" | -| enable-api-fields | stable | default to stable, "beta" option added | +| Flag | Current Default | Proposed State | +|-----------------------------------------------|-----------------|----------------------------------------------------| +| disable-affinity-assistant | false | default to true | +| disable-creds-init | false | no change | +| running-in-environment-with-injected-sidecars | true | no change | +| require-git-ssh-secret-known-hosts | false | no change | +| enable-tekton-oci-bundles | false | mark as deprecated and remove in v1 | +| enable-custom-tasks | false | collapsed under enable-api-fields, requires "beta" | +| enable-api-fields | stable | default to stable, "beta" option added | ### Feature Completeness diff --git a/teps/0097-breakpoints-for-taskruns-and-pipelineruns.md b/teps/0097-breakpoints-for-taskruns-and-pipelineruns.md index e614d7c5d..c3c8362e2 100644 --- a/teps/0097-breakpoints-for-taskruns-and-pipelineruns.md +++ b/teps/0097-breakpoints-for-taskruns-and-pipelineruns.md @@ -19,7 +19,7 @@ see-also: - [Requirements](#requirements) - [TaskRun Proposal](#taskrun-proposal) - [Breakpoint before/after a step](#breakpoint-beforeafter-a-step) - - [Breakpoint on failure of a step (already implemented; will be moved to breakpoints.onFailure spec)](#breakpoint-on-failure-of-a-step-already-implemented-will-be-moved-to--spec) + - [Breakpoint on failure of a step (already implemented; will be moved 
to breakpoints.onFailure spec)](#breakpoint-on-failure-of-a-step-already-implemented-will-be-moved-to-breakpointsonfailure-spec) - [Controlling step lifecycle](#controlling-step-lifecycle) - [Failure of a Step](#failure-of-a-step) - [Halting a Step on failure](#halting-a-step-on-failure) diff --git a/teps/0098-workflows.md b/teps/0098-workflows.md index 16b583d35..13be5a6d6 100644 --- a/teps/0098-workflows.md +++ b/teps/0098-workflows.md @@ -82,7 +82,7 @@ including making it easier to separate notifications and other infrastructure ou * For example, their CI/CD configuration could live in their repo and could be changed just by editing a file. * Ops teams or cluster operators will still need to interact with the cluster, and we should provide a way for app engineers to modify E2E CI/CD configuration directly on the cluster for those who want to. - * See [User Experience](#user-experience) for more information. + * See [User Experience](#user-experience-goals) for more information. ### Future Work @@ -381,7 +381,7 @@ some pain points have since been addressed by new Triggers features: - May be addressed by CustomInterceptors, but more investigation is needed. Building a successful Workflows project may involve addressing some of these gaps in Triggers, and adding new -features such as [scheduled and polling runs](./0083-scheduled-and-polling-runs-in-tekton.md). +features such as [scheduled and polling runs](./0083-polling-runs-in-tekton.md). ## Prior Art @@ -397,7 +397,7 @@ from a connected git repository. Whenever this configuration changes, Flux will automatically bring the state of their Kubernetes cluster in sync with the state of the configuration defined in their repository. 
- A [POC of Workflows based on FluxCD](https://github.com/tektoncd/experimental/pull/921) found that the `GitRepository` CRD is a close analogue of the repo polling functionality - described in [TEP-0083](./0083-scheduled-and-polling-runs-in-tekton.md), and is + described in [TEP-0083](./0083-polling-runs-in-tekton.md), and is well suited for CD use cases. - The Flux `GitHub` receiver can be used to trigger reconciliation between a repo and a cluster when an event is received from a webhook, but the event body is not @@ -653,7 +653,7 @@ rewriting all of your Tekton Workflows. It's possible that the biggest barriers to adoption and easy setup are that there just aren't enough docs for how to set up Tekton end-to-end and enough catalog Tasks that interact with commonly used platforms. We could also add more features to Triggers, such as polling and scheduled runs, as proposed in -[TEP-0083](./0083-scheduled-and-polling-runs-in-tekton.md), and fix [existing pain points](#use-of-triggers) +[TEP-0083](./0083-polling-runs-in-tekton.md), and fix [existing pain points](#use-of-triggers) that are preventing some platform builders from building on top of Triggers. These improvements are useful even outside the context of Workflows. 
However, we would still need to address the diff --git a/teps/0100-embedded-taskruns-and-runs-status-in-pipelineruns.md b/teps/0100-embedded-taskruns-and-runs-status-in-pipelineruns.md index 06bf0cf07..fcb2ea630 100644 --- a/teps/0100-embedded-taskruns-and-runs-status-in-pipelineruns.md +++ b/teps/0100-embedded-taskruns-and-runs-status-in-pipelineruns.md @@ -265,7 +265,7 @@ This struct, and `ChildStatusReferences.ConditionChecks`, will be removed once references for the `conditions`' statuses, because `ConditionCheckStatus`, the only thing in `PipelineRunConditionCheckStatus` other than the `ConditionName`, isn't replicated anywhere else, and contains a fairly minimal amount of data - the pod name, start and -completion times, and a `corev1.ContainerState`. See [the issue for deprecating `Conditions`](issue-3377) +completion times, and a `corev1.ContainerState`. See [the issue for deprecating `Conditions`][issue-3377] for more information on the planned removal of `Conditions`. ```go @@ -790,6 +790,7 @@ support this expansion. 
[tep-0096]: https://github.com/tektoncd/community/blob/main/teps/0096-pipelines-v1-api.md [issue-3140]: https://github.com/tektoncd/pipeline/issues/3140 [issue-3792]: https://github.com/tektoncd/pipeline/issues/3792 +[issue-3377]: https://github.com/tektoncd/pipeline/issues/3377 [issue-82]: https://github.com/tektoncd/results/issues/82 [api-wg]: https://docs.google.com/document/d/17PodAxG8hV351fBhSu7Y_OIPhGTVgj6OJ2lPphYYRpU/edit#heading=h.esbaqjpyouim [pipelinerunstatus]: https://github.com/tektoncd/pipeline/blob/411d033c5e4bf3409f01b175531cbc1a0a75fadb/pkg/apis/pipeline/v1beta1/pipelinerun_types.go#L290-L296 diff --git a/teps/0102-https-connection-to-triggers-interceptor.md b/teps/0102-https-connection-to-triggers-interceptor.md index fc041af25..62a7b307c 100644 --- a/teps/0102-https-connection-to-triggers-interceptor.md +++ b/teps/0102-https-connection-to-triggers-interceptor.md @@ -105,5 +105,5 @@ At high level below are few implementation details - [Add changes to run clusterinterceptor as HTTPS](https://github.com/tektoncd/triggers/pull/1333) -## References +## References 1. 
GitHub issue: [#871](https://github.com/tektoncd/triggers/issues/871) diff --git a/teps/0104-tasklevel-resource-requirements.md b/teps/0104-tasklevel-resource-requirements.md index 11f4f2ad4..0e49e3823 100644 --- a/teps/0104-tasklevel-resource-requirements.md +++ b/teps/0104-tasklevel-resource-requirements.md @@ -291,7 +291,7 @@ spec: ``` | Step name | CPU request | CPU limit | -| --------- | ----------- | --------- | +|-----------|-------------|-----------| | step-1 | 0.5 | N/A | | step-2 | 0.5 | N/A | | step-3 | 0.5 | N/A | @@ -323,7 +323,7 @@ spec: ``` | Step name | CPU request | CPU limit | -| --------- | ----------- | --------- | +|-----------|-------------|-----------| | step-1 | 1 | 3 | | step-2 | 1 | 3 | | step-3 | 1 | 3 | @@ -358,7 +358,7 @@ spec: ``` | Step name | CPU request | CPU limit | -| --------- | ----------- | --------- | +|-----------|-------------|-----------| | step-1 | 0.5 | 2 | | step-2 | 0.5 | 2 | | step-3 | 0.5 | 2 | @@ -400,7 +400,7 @@ spec: The resulting pod would have the following containers: | Step/Sidecar name | CPU request | CPU limit | -| ----------------- | ----------- | --------- | +|-------------------|-------------|-----------| | step-1 | 750m | N/A | | step-2 | 750m | N/A | | sidecar-1 | 800m | 1 | @@ -449,7 +449,7 @@ spec: The resulting pod would have the following containers: | Step name | CPU request | CPU limit | -| --------- | ----------- | --------- | +|-----------|-------------|-----------| | step-1 | 500m | 750m | | step-2 | 500m | 750m | | step-3 | 500m | 750m | @@ -501,7 +501,7 @@ spec: The resulting pod would have the following containers: | Step name | CPU request | CPU limit | -| --------- | ----------- | --------- | +|-----------|-------------|-----------| | step-1 | 600m | 750m | | step-2 | 600m | 750m | | step-3 | 600m | 750m | @@ -541,7 +541,7 @@ spec: The resulting pod would have the following containers: | Step name | CPU request | CPU limit | -| --------- | ----------- | --------- | 
+|-----------|-------------|-----------| | step-1 | 750m | N/A | | step-2 | 750m | N/A | @@ -580,7 +580,7 @@ spec: The resulting pod would have the following containers: | Step name | CPU request | CPU limit | -| --------- | ----------- | --------- | +|-----------|-------------|-----------| | step-1 | 1 | N/A | | step-2 | 1 | N/A | @@ -649,7 +649,7 @@ spec: The resulting pod would have the following containers: | Step name | CPU request | CPU limit | Memory request | Memory limit | -| --------- | ----------- | --------- | -------------- | ------------ | +|-----------|-------------|-----------|----------------|--------------| | step-1 | 750m | N/A | 250Mi | 1Gi | | step-2 | 750m | N/A | 250Mi | 1Gi | diff --git a/teps/0105-remove-pipeline-v1alpha1-api.md b/teps/0105-remove-pipeline-v1alpha1-api.md index 13d8a4967..aa25ce4f4 100644 --- a/teps/0105-remove-pipeline-v1alpha1-api.md +++ b/teps/0105-remove-pipeline-v1alpha1-api.md @@ -21,7 +21,7 @@ authors: ## Summary This TEP proposes a clear schedule for the removal of Pipeline's deprecated -`v1alpha1` API. As we [move towards the stable V1 API](./0096-pipelines-v1-api.md), +`v1alpha1` API. As we [move towards the stable V1 API](./0096-pipelines-v1.md), removing the long-deprecated `v1alpha1` API will help clarify migration paths for users and simplify implementation of the `v1` API. 
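The TEP-0104 tables reformatted above all follow one pattern: a task-level CPU request is divided evenly across the step containers (e.g. the 750m/750m rows for a two-step task). The sketch below illustrates that even split in millicores; it is an illustrative reading of the tables, not the actual Tekton controller code, which also has to handle limits, sidecars, and memory.

```python
def split_task_request(task_request_millicores, num_steps):
    """Divide a task-level CPU request (in millicores) evenly across steps.

    Illustrative sketch of the division shown in the TEP-0104 tables
    above -- not the real controller logic, which also covers limits,
    sidecars, and memory.
    """
    base = task_request_millicores // num_steps
    remainder = task_request_millicores % num_steps
    # Hand the integer remainder to the first steps so the per-step
    # requests still sum exactly to the task-level request.
    return [base + 1 if i < remainder else base for i in range(num_steps)]
```

For a 1.5-CPU (1500m) task request over two steps this yields 750m per step, matching the table rows above.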
diff --git a/teps/0106-support-specifying-metadata-per-task-in-runtime.md b/teps/0106-support-specifying-metadata-per-task-in-runtime.md index 99a736676..a548fc104 100644 --- a/teps/0106-support-specifying-metadata-per-task-in-runtime.md +++ b/teps/0106-support-specifying-metadata-per-task-in-runtime.md @@ -29,8 +29,8 @@ authors: - [Simplicity](#simplicity) - [Drawbacks](#drawbacks) - [Alternatives](#alternatives) - - [Add Metadata under TaskRef in Pipeline](#add-metadata-under--in-) - - [Create a PipelineTaskRef type](#create-a--type) + - [Add Metadata under TaskRef in Pipeline](#add-metadata-under-taskref-in-pipeline) + - [Create a PipelineTaskRef type](#create-a-pipelinetaskref-type) - [Utilize Parameter Substitutions](#utilize-parameter-substitutions) - [Test Plan](#test-plan) - [Implementation Pull Requests](#implementation-pull-requests) diff --git a/teps/0111-propagating-workspaces.md b/teps/0111-propagating-workspaces.md index c8a871278..cc80d5b0b 100644 --- a/teps/0111-propagating-workspaces.md +++ b/teps/0111-propagating-workspaces.md @@ -626,7 +626,7 @@ status: ## Alternatives -### Propagate all workspaces to all Pipeline Tasks +### Propagate all workspaces to all Pipeline Tasks Propagating all `Workspaces` defined at `PipelineRun` down to all the `PipelineTasks` regardless of whether they are used by that `PipelineTask`. However, a workspace may have sensitive data that we don’t want to be accessible to all tasks. This approach is rejected because we only want data available where it is needed. We could remove the unwanted workspaces just before creating the task pod but this method will in turn also propagate workspaces for referenced parameters which we want to avoid because the behavior becomes opaque when users can't see the relationship between Workspaces declared in the referenced resources and the Workspaces supplied in runtime resources. 
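Several hunks above repair broken table-of-contents anchors (e.g. `#add-metadata-under--in-` becoming `#add-metadata-under-taskref-in-pipeline`). The corrected anchors follow the usual GitHub-style slug rule: lowercase the heading, drop punctuation, and turn spaces into hyphens. A simplified sketch of that rule — it ignores duplicate-heading suffixes and some Unicode edge cases the real slugger handles:

```python
import re


def github_slug(heading):
    """Approximate GitHub's heading-to-anchor slug rule.

    Simplified sketch: lowercase, strip punctuation (keeping word
    characters, spaces, and hyphens), then map spaces to hyphens.
    """
    s = heading.lower()
    s = re.sub(r"[^\w\s-]", "", s)  # drop punctuation such as '.', ',', '/'
    return s.replace(" ", "-")
```

Applied to the headings fixed above, this reproduces the corrected anchors, e.g. `github_slug("Add Metadata under TaskRef in Pipeline")` gives `add-metadata-under-taskref-in-pipeline`.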
diff --git a/teps/0112-replace-volumes-with-workspaces.md b/teps/0112-replace-volumes-with-workspaces.md index f398de16c..6f6796f86 100644 --- a/teps/0112-replace-volumes-with-workspaces.md +++ b/teps/0112-replace-volumes-with-workspaces.md @@ -456,7 +456,7 @@ Currently, users may mount any type of volume supported by Kubernetes into their When considering replacing volumes with Workspaces, we should reconsider what types of volumes are available as Workspace bindings. There are several options available: -- [Support all types of volumes as workspace bindings](#support-all-types-of-volumes-in-workspace-binding). +- [Support all types of volumes as workspace bindings](#support-all-types-of-volumes-in-workspacebinding). One downside is that there's no clear use case for many of these volume types in the Tekton API. - Support for [hostPath volumes](#support-hostpath-volumes-in-workspace-bindings). This helps with advanced docker-in-docker use cases, but isn't necessary for most docker-in-docker use cases and comes with some security concerns. 
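TEP-0112 above weighs which Kubernetes volume types should be allowed as workspace bindings. The sketch below shows the basic mapping from a workspace binding to a pod volume for a few binding types Tekton supports today (emptyDir, configMap, secret, persistentVolumeClaim). It is a simplified illustration: the `ws-` naming is hypothetical, and the real controller handles additional fields such as `subPath` and projected volumes.

```python
def workspace_binding_to_volume(name, binding):
    """Map a Tekton-style workspace binding to a Kubernetes pod volume.

    Simplified sketch; the volume name prefix is illustrative and the
    real controller supports more binding fields than shown here.
    """
    supported = ("emptyDir", "configMap", "secret", "persistentVolumeClaim")
    for kind in supported:
        if kind in binding:
            # Pod volumes pair a name with exactly one volume source.
            return {"name": f"ws-{name}", kind: binding[kind]}
    raise ValueError(f"unsupported workspace binding for {name!r}")
```

For example, an `emptyDir: {}` binding named `source` becomes the pod volume `{"name": "ws-source", "emptyDir": {}}`.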
diff --git a/teps/0114-custom-tasks-beta.md b/teps/0114-custom-tasks-beta.md index 0f1c57639..19dfc5e7b 100644 --- a/teps/0114-custom-tasks-beta.md +++ b/teps/0114-custom-tasks-beta.md @@ -29,7 +29,7 @@ see-also: - [References and Specifications](#references-and-specifications) - [Remove Pod Template](#remove-pod-template) - [Feature Gates](#feature-gates) - - [New Feature Flag custom-task-version](#new-feature-flag-) + - [New Feature Flag custom-task-version](#new-feature-flag-custom-task-version) - [Existing Feature Gates](#existing-feature-gates) - [Cancellation](#cancellation) - [Documentation](#documentation) @@ -358,7 +358,7 @@ For further details, see [tektoncd/community#523][523], [tektoncd/community#667] [tep-0002]: 0002-custom-tasks.md [tep-0105]: 0105-remove-pipeline-v1alpha1-api.md -[tep-0096]: 0096-pipelines-v1-api.md +[tep-0096]: 0096-pipelines-v1.md [tep-0071]: 0071-custom-task-sdk.md [tep-0061]: 0061-allow-custom-task-to-be-embedded-in-pipeline.md [tep-0069]: 0069-support-retries-for-custom-task-in-a-pipeline.md @@ -375,7 +375,7 @@ For further details, see [tektoncd/community#523][523], [tektoncd/community#667] [pipelines-in-pipelines]: https://github.com/tektoncd/experimental/tree/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c/pipelines-in-pipelines [pipeline-in-pod]: https://github.com/tektoncd/experimental/tree/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c/pipeline-in-pod [config]: https://github.com/tektoncd/hub/blob/0ba02511db7a06aef54e2257bf2540be85b53f45/config.yaml -[hub-cli0]: https://github.com/tektoncd/cli/blob/d826e7a2a17a5f3d3f5b3fe8ac9cff95856627d7/docs/cmd/tkn_hub.md +[hub-cli]: https://github.com/tektoncd/cli/blob/d826e7a2a17a5f3d3f5b3fe8ac9cff95856627d7/docs/cmd/tkn_hub.md [4313]: https://github.com/tektoncd/pipeline/issues/4313 [523]: https://github.com/tektoncd/community/pull/523 [667]: https://github.com/tektoncd/community/discussions/667 diff --git a/teps/0115-tekton-catalog-git-based-versioning.md 
b/teps/0115-tekton-catalog-git-based-versioning.md index a369e4028..a098bb916 100644 --- a/teps/0115-tekton-catalog-git-based-versioning.md +++ b/teps/0115-tekton-catalog-git-based-versioning.md @@ -335,7 +335,7 @@ resources into separate Catalogs where each will have its own versioning. ##### eBay - Catalog with one Task -[eBay][ebay] shared a repository with a single `Task` for sending Slack messages. +[eBay][eBay] shared a repository with a single `Task` for sending Slack messages. The eBay team could modify the repository to meet the organization contract in this TEP - this would make it a Catalog. @@ -746,7 +746,7 @@ approach of tagging new releases in the Catalogs as is done in GitHub Actions. We could provide guidelines and recommendations for creating Catalogs e.g. deciding whether to group resources together instead of splitting them into separate Catalogs. We can explore this in future work. -### Catlin +### Catlin Today, [catlin bump](https://github.com/tektoncd/catlin#bump) command only supports directory-based versioning catalog. We could further extend the [catlin bump](https://github.com/tektoncd/catlin#bump) command to support git-based versioning catalogs (where the command will create a new git tag and a corresponding release note based on the current latest version). diff --git a/teps/0120-canceling-concurrent-pipelineruns.md b/teps/0120-canceling-concurrent-pipelineruns.md index 0a0532be0..3f7138dba 100644 --- a/teps/0120-canceling-concurrent-pipelineruns.md +++ b/teps/0120-canceling-concurrent-pipelineruns.md @@ -54,7 +54,7 @@ If a new deployment PipelineRun starts while a previous one is still running, th #### Out of scope -- Queueing concurrent PipelineRuns; this is covered in [TEP-0132](./0132-queueing-concurrent-pipelineruns.md). +- Queueing concurrent PipelineRuns; this is covered in [TEP-0132](./0132-queueing-concurrent-runs.md). 
### Requirements diff --git a/teps/0121-refine-retries-for-taskruns-and-customruns.md b/teps/0121-refine-retries-for-taskruns-and-customruns.md index 2845f4b9e..a93c3a1ed 100644 --- a/teps/0121-refine-retries-for-taskruns-and-customruns.md +++ b/teps/0121-refine-retries-for-taskruns-and-customruns.md @@ -25,17 +25,17 @@ see-also: - [Design Details](#design-details) - [Timeout per Retry](#timeout-per-retry) - [Retries in TaskRuns and CustomRuns](#retries-in-taskruns-and-customruns) - - [Execution Status of a failed pipelineTask with retries](#execution-status-of-a---with-) + - [Execution Status of a failed pipelineTask with retries](#execution-status-of-a-failed-pipelinetask-with-retries) - [Conditions.Succeeded](#conditionssucceeded) - [RetriesStatus](#retriesstatus) - [Future Work](#future-work) - [Alternatives](#alternatives) - - [1. Implement retries in PipelineRun](#1-implement--in-pipelinerun) - - [2. Implement retries in TaskRun/Run, use retryAttempts instead of retriesStatus](#2-implement--in-taskrunrun-use--instead-of-) + - [1. Implement retries in PipelineRun](#1-implement-retries-in-pipelinerun) + - [2. Implement retries in TaskRun/Run, use retryAttempts instead of retriesStatus](#2-implement-retries-in-taskrunrun-use-retryattempts-instead-of-retriesstatus) - [Two API Changes](#two-api-changes) - [Two New Labels](#two-new-labels) - - [How the Retries Works](#how-the--works) - - [3. Conditions.RetrySucceeded](#3-) + - [How the Retries Works](#how-the-retries-works) + - [3. Conditions.RetrySucceeded](#3-conditionsretrysucceeded) - [References](#references) @@ -115,12 +115,12 @@ Typically, a retry strategy includes: 4. Timeout of each attempt 5. 
Retry until a certain condition is met -| | [Retry Action in GA](https://github.com/marketplace/actions/retry-action) | [GitLab Job](https://docs.gitlab.com/ee/ci/yaml/#retry) | [Ansible Task](https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#retrying-a-task-until-a-condition-is-met)| [Concourse Step](https://concourse-ci.org/attempts-step.html#attempts-step) | -|:---|:---|:---|:---|:---| -| **When to Retry** | on failure |configurable|[always retry, conditional stop](https://github.com/ansible/ansible/pull/76101) [^ansible-conditional-stop]|configurable| -| **Attempts amount** |supported|supported|supported|supported| -| **Timeout for each attempt** |supported|[supported](https://docs.gitlab.com/ee/ci/yaml/#retrywhen)|supported|supported| -| **Timeout for all attempts** |[supported](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepstimeout-minutes)|-|-|-| +| | [Retry Action in GA](https://github.com/marketplace/actions/retry-action) | [GitLab Job](https://docs.gitlab.com/ee/ci/yaml/#retry) | [Ansible Task](https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#retrying-a-task-until-a-condition-is-met) | [Concourse Step](https://concourse-ci.org/attempts-step.html#attempts-step) | +|:-----------------------------|:----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------| +| **When to Retry** | on failure | configurable | [always retry, conditional stop](https://github.com/ansible/ansible/pull/76101) [^ansible-conditional-stop] | configurable | +| **Attempts amount** | supported | supported | supported | supported | +| **Timeout for each 
attempt** | supported | [supported](https://docs.gitlab.com/ee/ci/yaml/#retrywhen) | supported | supported | +| **Timeout for all attempts** | [supported](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepstimeout-minutes) | - | - | - | Several observations regarding to the feature table above: @@ -148,16 +148,16 @@ The `PipelineRun` controller does not check for `len(retriesStatus)` to determin The following table shows how an overall status of a `taskRun` or a `customRun` for a `pipelineTask` with `retries` set to 3: -| `status` | `reason` |`completionTime` is set | description | -|----------|-------------|------------------------|------------------------------------------------------------------------------------------------------------| -| Unknown | Running | No | The `taskRun` has been validated and started to perform its work. | -| Unknown | ToBeRetried | No | The `taskRun` (zero attempt of a `pipelineTask`) finished executing but failed. It has 3 retries pending. | -| Unknown | Running | No | First attempt of a `taskRun` has started to perform its work. | -| Unknown | ToBeRetried | No | The `taskRun` (first attempt of a `pipelineTask`) finished executing but failed. It has 2 retries pending. | -| Unknown | Running | No | Second attempt of a `taskRun` has started to perform its work. | -| Unknown | ToBeRetried | No | The `taskRun` (second attempt of a `pipelineTask`) finished executing but failed. It has 1 retry pending. | -| Unknown | Running | No | Third attempt of a `taskRun` has started to perform its work. | -| False | Failed | Yes | The `taskRun` (third attempt of a `pipelineTask`) finished executing but failed. No more retries pending. 
| +| `status` | `reason` | `completionTime` is set | description | +|----------|-------------|-------------------------|------------------------------------------------------------------------------------------------------------| +| Unknown | Running | No | The `taskRun` has been validated and started to perform its work. | +| Unknown | ToBeRetried | No | The `taskRun` (zero attempt of a `pipelineTask`) finished executing but failed. It has 3 retries pending. | +| Unknown | Running | No | First attempt of a `taskRun` has started to perform its work. | +| Unknown | ToBeRetried | No | The `taskRun` (first attempt of a `pipelineTask`) finished executing but failed. It has 2 retries pending. | +| Unknown | Running | No | Second attempt of a `taskRun` has started to perform its work. | +| Unknown | ToBeRetried | No | The `taskRun` (second attempt of a `pipelineTask`) finished executing but failed. It has 1 retry pending. | +| Unknown | Running | No | Third attempt of a `taskRun` has started to perform its work. | +| False | Failed | Yes | The `taskRun` (third attempt of a `pipelineTask`) finished executing but failed. No more retries pending. | Pipeline controller can now rely on `ConditionSucceeded` set to `Failed` after all the retries are exhausted. diff --git a/teps/0122-complete-build-instructions-and-parameters.md b/teps/0122-complete-build-instructions-and-parameters.md index 00b0fd1b7..cf497c115 100644 --- a/teps/0122-complete-build-instructions-and-parameters.md +++ b/teps/0122-complete-build-instructions-and-parameters.md @@ -114,47 +114,47 @@ This will allow us to create a provenance for a task run as [shown](#provenance- The table below provides the name and description of the API fields that the Task requires. In addition, it also lists if that information is required, not required, or provided by the provenance. If it is required, it also shows where the information should be provided. 
Looking at the table below and required [step](#step-specification) information, there are only a handful of fields that are not required for reproducibility. Instead of cherry-picking most of the fields, including the entire taskSpec in the buildConfig section of the provenance is likely the best way to go forward. Additionally, if we update the spec in the future, then chains will not have to be updated correspondingly. The work described in [issue](https://github.com/tektoncd/chains/issues/597) is meant to handle this. -| | **Metadata** | | | -| ------ | ------------- | ----- | ----- | -| **Field Name** | **Description** | **Not Required / Required / Provided by provenance** | **Insert in provenance** | -| name | Name of the task | Required | | -| Spec | | | | -| resources | resources used by steps | Not Required (Deprecated) | | -| description | description of the task | Not Required | | -| params | parameters required by task | Provided | | -| results | results produced by the task | Required(Atleast name, type and properties (in case of object)) | buildConfig | -| volumes | volume mounted on the container | Required | buildConfig | -| workspaces | workspace bindings used by the task | Required | buildConfig | -| steps | Steps performed by the task | Provided | | -| sidecars | Sidecars running alongside tasks | Required | buildConfig | -| step template | Specifies a container configuration that will be used as the starting point for all of the Steps in your Task | Required | buildConfig | +| | **Metadata** | | | +|----------------|---------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------|--------------------------| +| **Field Name** | **Description** | **Not Required / Required / Provided by provenance** | **Insert in provenance** | +| name | Name of the task | Required | | +| Spec | | | | +| resources | resources used by steps | Not 
Required (Deprecated) | | +| description | description of the task | Not Required | | +| params | parameters required by task | Provided | | +| results | results produced by the task | Required(Atleast name, type and properties (in case of object)) | buildConfig | +| volumes | volume mounted on the container | Required | buildConfig | +| workspaces | workspace bindings used by the task | Required | buildConfig | +| steps | Steps performed by the task | Provided | | +| sidecars | Sidecars running alongside tasks | Required | buildConfig | +| step template | Specifies a container configuration that will be used as the starting point for all of the Steps in your Task | Required | buildConfig | ### Step Specification The table below provides the name and description of the API fields that the Step requires. In addition, it also lists if that information is required, not required, or provided by the provenance. If it is required, it also shows where the information should be provided. **Deprecated fields have not been included in the table below.** Like in [Task Specification](#task-specification), almost all fields are either required or provided in the provenance. Therefore for the same reasons, we should include the entire spec in the generated provenance. 
-| | **Metadata** | | | -| ------ | ------------- | ----- | ----- | -| **Field Name** | **Description** | **Not Required / Required / Provided by provenance** | **Insert in provenance** | -| name | Name of the step | Not Required | | -| image | Image name for this step | Provided | | -| command | Entrypoint array | Provided | | -| args | Arguments to the entrypoint | Provided | | -| working Dir | Step’s working directory | Required | buildConfig | -| EnvFrom | List of sources to populate env variables | Required | buildConfig | -| Env | List of env variables to set in the container | Required | buildConfig | -| Resources | Compute resources required by this step | Required | buildConfig | -| VolumeMounts | Volumes to mount in the step’s filesystem | Required | buildConfig | -| VolumeDevices | List of block devices to be used by the step | Required | buildConfig | -| ImagePullPolicy | Image Pull Policy | Required | buildConfig | -| Security Context | security options the step should run with | Required | buildConfig | -| Script | Contents of an executable file to execute | Provided | | -| Timeout | Time after which the step times out | Required | buildConfig | -| Workspaces | list of workspaces from the task that the step wants exclusive access to | Required | buildConfig | -| OnError | behavior of the container on error | Required | buildConfig | -| StdOut Config | config of the stdout stream | Required | buildConfig | -| StdErr Config | config of the stderr stream | Required | buildConfig | +| | **Metadata** | | | +|------------------|--------------------------------------------------------------------------|------------------------------------------------------|--------------------------| +| **Field Name** | **Description** | **Not Required / Required / Provided by provenance** | **Insert in provenance** | +| name | Name of the step | Not Required | | +| image | Image name for this step | Provided | | +| command | Entrypoint array | Provided | | +| args | 
Arguments to the entrypoint | Provided | | +| working Dir | Step’s working directory | Required | buildConfig | +| EnvFrom | List of sources to populate env variables | Required | buildConfig | +| Env | List of env variables to set in the container | Required | buildConfig | +| Resources | Compute resources required by this step | Required | buildConfig | +| VolumeMounts | Volumes to mount in the step’s filesystem | Required | buildConfig | +| VolumeDevices | List of block devices to be used by the step | Required | buildConfig | +| ImagePullPolicy | Image Pull Policy | Required | buildConfig | +| Security Context | security options the step should run with | Required | buildConfig | +| Script | Contents of an executable file to execute | Provided | | +| Timeout | Time after which the step times out | Required | buildConfig | +| Workspaces | list of workspaces from the task that the step wants exclusive access to | Required | buildConfig | +| OnError | behavior of the container on error | Required | buildConfig | +| StdOut Config | config of the stdout stream | Required | buildConfig | +| StdErr Config | config of the stderr stream | Required | buildConfig | ### TaskRef @@ -165,24 +165,24 @@ TaskRef is useful for references to existing tasks on the cluster or remote task The table below provides the name and description of the API fields that the TaskRun requires. In addition, it also lists if that information is required, not required, or provided by the provenance. If it is required, it also shows where the information should be provided. Note that since task run provides information during runtime, most of the information is contained in the **invocation.Parameters** section of the [provenance](#provenance-for-executed-taskrun). 
-| | **Metadata** | | | -| ------ | ------------- | ----- | ----- | -| **Field Name** | **Description** | **Not Required / Required / Provided by provenance** | **Insert in provenance** | -| name| Name of the task run| Not required for completeness | | -| | **Spec** | | | -| resources | resources used by task | Deprecated but required if using older tekton version | invocation.Parameters | -| service account name | name of the service account | Required | invocation.Parameters | -| params | parameter values provided by taskrun | Provided | invocation.Parameters | -| workspaces | workspaces used by the task | Required | invocation.Parameters | -| pod template | | Required | invocation.Parameters | -| Timeout | time in which a task should complete | Required for completeness | invocation.Parameters | -| StepOverrides | Override Step configuration specified in a Task. Currently we can override compute resources. | Required for completeness | invocation.Parameters | -| SidecarOverrides | Override Sidecar configuration specified in a Task. Currently we can override compute resources. | Required for completeness | invocation.Parameters | -| ComputeResources | Configure compute resources required by the steps in the task. | Required (because this could also indicate a minimum amount of resources required for the task run.) 
| invocation.Parameters | -| [TaskSpec](#task-specification) | Specification of the resolved task | See [spec](#task-specification) | buildConfig | -| [TaskRef](#taskref) | Details of the referenced task | Not Required (Since we save the complete TaskSpec even that of a remote task) | | -| | **Status** | | | -| Task Results | Results produced by the task run | Not Required since this is an output | | +| | **Metadata** | | | +|---------------------------------|--------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|--------------------------| +| **Field Name** | **Description** | **Not Required / Required / Provided by provenance** | **Insert in provenance** | +| name | Name of the task run | Not required for completeness | | +| | **Spec** | | | +| resources | resources used by task | Deprecated but required if using older tekton version | invocation.Parameters | +| service account name | name of the service account | Required | invocation.Parameters | +| params | parameter values provided by taskrun | Provided | invocation.Parameters | +| workspaces | workspaces used by the task | Required | invocation.Parameters | +| pod template | | Required | invocation.Parameters | +| Timeout | time in which a task should complete | Required for completeness | invocation.Parameters | +| StepOverrides | Override Step configuration specified in a Task. Currently we can override compute resources. | Required for completeness | invocation.Parameters | +| SidecarOverrides | Override Sidecar configuration specified in a Task. Currently we can override compute resources. | Required for completeness | invocation.Parameters | +| ComputeResources | Configure compute resources required by the steps in the task. | Required (because this could also indicate a minimum amount of resources required for the task run.) 
| invocation.Parameters | +| [TaskSpec](#task-specification) | Specification of the resolved task | See [spec](#task-specification) | buildConfig | +| [TaskRef](#taskref) | Details of the referenced task | Not Required (Since we save the complete TaskSpec even that of a remote task) | | +| | **Status** | | | +| Task Results | Results produced by the task run | Not Required since this is an output | | ### Configuration Feature Flags diff --git a/teps/0125-add-credential-filter-to-entrypoint-logger.md b/teps/0125-add-credential-filter-to-entrypoint-logger.md index d79158a2c..1fa199cf3 100644 --- a/teps/0125-add-credential-filter-to-entrypoint-logger.md +++ b/teps/0125-add-credential-filter-to-entrypoint-logger.md @@ -154,7 +154,7 @@ env: The secret key reference is the trigger to detect an environment variable that contains a secret and that needs to be redacted. -### Secrets mounted as Files +### Secrets mounted as Files Secrets can also be mounted into pods via files in different ways. The following pod syntax should be supported in the secret detection logic. 
diff --git a/teps/0128-scheduled-runs.md b/teps/0128-scheduled-runs.md index 7405bd7d8..9d5f85fc4 100644 --- a/teps/0128-scheduled-runs.md +++ b/teps/0128-scheduled-runs.md @@ -31,7 +31,7 @@ collaborators: [] - [Security](#security) - [Drawbacks](#drawbacks) - [Alternatives](#alternatives) - - [ScheduledTrigger CRD with fixed event body](#crontrigger-crd-with-fixed-event-body) + - [ScheduledTrigger CRD with fixed event body](#scheduledtrigger-crd-with-fixed-event-body) - [Add schedule to Trigger](#add-schedule-to-trigger) - [Add schedule to EventListener](#add-schedule-to-eventlistener) - [Create a PingSource](#create-a-pingsource) diff --git a/teps/0130-pipeline-level-service.md b/teps/0130-pipeline-level-service.md index 63df93476..aad5ae244 100644 --- a/teps/0130-pipeline-level-service.md +++ b/teps/0130-pipeline-level-service.md @@ -17,7 +17,6 @@ collaborators: [] - [Docker builds](#docker-builds) - [Other uses for this feature](#other-uses-for-this-feature) - [Goals](#goals) - - [Non-Goals](#non-goals) - [Requirements](#requirements) - [Related features in other CI/CD systems](#related-features-in-other-cicd-systems) - [User Experience](#user-experience) diff --git a/teps/0131-tekton-conformance-policy.md b/teps/0131-tekton-conformance-policy.md index 4b56f18d1..32834528f 100644 --- a/teps/0131-tekton-conformance-policy.md +++ b/teps/0131-tekton-conformance-policy.md @@ -24,7 +24,7 @@ collaborators: ['@dibyom@', '@vdemeester@'] - [Conformance Test Suites](#conformance-test-suites) - [Q&A](#qa) - [1. Do field requirements mean the same for Tekton users and vendors?](#1-do-field-requirements-mean-the-same-for-tekton-users-and-vendors) - - [2 Tekton Conformance Policy (this doc) v.s. Tekton API Compatibility Policy.](#2-tekton-conformance-policy-this-doc-vs-tekton-api-compatibility-policy) + - [2 Tekton Conformance Policy (this doc) v.s. 
Tekton API Compatibility Policy.](#2-tekton-conformance-policy-this-doc-vs-tekton-api-compatibility-policytekton-api-compatibility-policy) - [3. Do we only include GA primitives in the conformance policy?](#3-do-we-only-include-ga-primitives-in-the-conformance-policy) - [4. Should we define the Conformance Policy per Tekton Services?](#4-should-we-define-the-conformance-policy-per-tekton-services) - [Open Questions](#open-questions) @@ -146,7 +146,7 @@ Worth noting that even if we bump up the policy version, vendors can still claim ### Policy Update Procedure * **Open a PR** to propose the update. -* **The PR** must be approved by more than half of [the project OWNERS][project owners] (i.e. 50% + 1) when it involves actual requirements changes (as opposed to typo/grammar fixing). +* **The PR** must be approved by more than half of [the project OWNERS][Project OWNERs] (i.e. 50% + 1) when it involves actual requirements changes (as opposed to typo/grammar fixing). * **Update Notice** ahead of time. * Nice to have: conformance test kit can pop up the update notice when vendors run it. * How long do we want the update notice to be ahead of is an open question. @@ -155,7 +155,7 @@ Worth noting that even if we bump up the policy version, vendors can still claim #### Conformance Test Suites -We can follow "[duck test][duck test]", we provide inputs (YAMLs including all required fields) and outputs (status, results etc.) to see if vendors meet the requirements (vendors will need to convert the YAMLs to the format they support). Vendors are required to open a PR containing an instruction doc for reproducing the test result, and the community reviews and approves the PR (Borrowed the idea from [Kubernetes Conformance Test][Kubernetes Conformance Test]). This is a one-time test, but the test result MUST be reproducible while the Conformance claim is valid. 
+We can follow the "[duck test][Duck Test]": we provide inputs (YAMLs including all required fields) and outputs (status, results, etc.) to see if vendors meet the requirements (vendors will need to convert the YAMLs to the format they support). Vendors are required to open a PR containing an instruction doc for reproducing the test result, and the community reviews and approves the PR (an idea borrowed from the [Kubernetes Conformance Test][Kubernetes Conformance Test]). This is a one-time test, but the test result MUST be reproducible while the Conformance claim is valid. ### Q&A #### 1. Do field requirements mean the same for Tekton users and vendors? diff --git a/teps/0136-capture-traces-for-task-pod-events.md b/teps/0136-capture-traces-for-task-pod-events.md index a69db1751..9ff2b0b58 100644 --- a/teps/0136-capture-traces-for-task-pod-events.md +++ b/teps/0136-capture-traces-for-task-pod-events.md @@ -29,7 +29,6 @@ collaborators: [] - [Drawbacks](#drawbacks) - [Implementation Plan](#implementation-plan) - [Test Plan](#test-plan) - - [Upgrade and Migration Strategy](#upgrade-and-migration-strategy) - [Implementation Pull Requests](#implementation-pull-requests) - [References](#references) diff --git a/teps/0137-cloudevents-controller.md b/teps/0137-cloudevents-controller.md index cfa35f84d..24566b28c 100644 --- a/teps/0137-cloudevents-controller.md +++ b/teps/0137-cloudevents-controller.md @@ -151,15 +151,15 @@ The external controllers may only rely on the information available in the statu and the list of events in the cache, so the events sent will have to be slightly adapted accordingly: -| Resource | Condition | Reason | Current Event | New Event | CDEvent | Notes | -|---------------|-----------|-----------|----------------|-----------|----------|-------| -| Any | None | None | None | `queued` | `queued` | When the resource has no status, we know that it has been queued, but we don't know when the main controller will pick it up| -| Any | `Unknown` | `Started` |
`started` | `started` | `started` | The `Started` reason will only be visible by the controller is the main controller was not able to start running its resources in the same reconcile cycle | -| `TaskRun`, `PipelineRun` | `Unknown` | `Running` | `running` | `started`, `running` or None | `started` or None | If the `started` event was not sent yet, it is sent. If the `running` event was not sent yet, it is sent. If both events were sent already, the `running` event is sent only if there was a change in the `Condition` compared to the last `running` event sent | -| `TaskRun`, `PipelineRun` | `Unknown` | Any but `Running` | `unknown` | `started`, `unknown` or None | `started` or None | If the `started` event was not sent yet, it is sent. If the `unknown` event was not sent yet, it is sent. If both events were sent already, the `unknown` event is sent only if there was a change in the `Condition` compared to the last `unknown` event sent | -| `CustomRun` | `Unknown` | Any | `running` | `started`, `running` | `started` or None | If the `started` event was not sent yet, it is sent. 
We cannot make assumptions about how `Reason` and `Message` are used by the custom run controller, so always send a `running` event | -| Any | `Succeed` | Any | `successful` | `successful` | `finished` | The `finished` CDEvent include the `Condition` in the `outcome` field | -| Any | `Failed` | Any | `failed` | `failed` | `finished` | The `finished` CDEvent include the `Condition` in the `outcome` field | +| Resource | Condition | Reason | Current Event | New Event | CDEvent | Notes | +|--------------------------|-----------|-------------------|---------------|------------------------------|-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Any | None | None | None | `queued` | `queued` | When the resource has no status, we know that it has been queued, but we don't know when the main controller will pick it up | +| Any | `Unknown` | `Started` | `started` | `started` | `started` | The `Started` reason will only be visible by the controller is the main controller was not able to start running its resources in the same reconcile cycle | +| `TaskRun`, `PipelineRun` | `Unknown` | `Running` | `running` | `started`, `running` or None | `started` or None | If the `started` event was not sent yet, it is sent. If the `running` event was not sent yet, it is sent. If both events were sent already, the `running` event is sent only if there was a change in the `Condition` compared to the last `running` event sent | +| `TaskRun`, `PipelineRun` | `Unknown` | Any but `Running` | `unknown` | `started`, `unknown` or None | `started` or None | If the `started` event was not sent yet, it is sent. If the `unknown` event was not sent yet, it is sent. 
If both events were sent already, the `unknown` event is sent only if there was a change in the `Condition` compared to the last `unknown` event sent | +| `CustomRun` | `Unknown` | Any | `running` | `started`, `running` | `started` or None | If the `started` event was not sent yet, it is sent. We cannot make assumptions about how `Reason` and `Message` are used by the custom run controller, so always send a `running` event | +| Any | `Succeed` | Any | `successful` | `successful` | `finished` | The `finished` CDEvent include the `Condition` in the `outcome` field | +| Any | `Failed` | Any | `failed` | `failed` | `finished` | The `finished` CDEvent include the `Condition` in the `outcome` field | Since the logic for sending events depends on the content of the cache, it's important for the cache to be updated **in order**. @@ -505,14 +505,14 @@ interface to let users specify the data required and let Tekton send the events. Alternatively, events can be sent by the user directly, which is possible today. It gives users maximum flexibility but it also leaves the heavy-lifting of sending events to them. 
-| `TaskRun`/`CustomRun` | Controlled by | Trigger | Data Available | Example Events | -|:----------------------|:-------------:|:-------:|:--------------:|:--------------:| -| Before Start | User | Dedicated resource in the Pipeline | Used defined | Any | -| Start | Tekton | Resource Created | Annotations, Parameters | Build Started, TestSuiteRun Started | -| Running | Tekton | Condition Changed, Unknown | Annotations, Parameters | | -| Running | User | Event requested by user | Any data available in the step context | Any | -| End | Tekton | Condition Changed, True or False | Annotations, Parameters, Results | Build Finished, Artifact Packaged, Service Deployed | -| After End | User | Dedicated resource in the Pipeline | Any available in the context, results from previous Tasks| Any | +| `TaskRun`/`CustomRun` | Controlled by | Trigger                            | Data Available                                            | Example Events                                      | +|:----------------------|:-------------:|:----------------------------------:|:---------------------------------------------------------:|:---------------------------------------------------:| +| Before Start          | User          | Dedicated resource in the Pipeline | User defined                                              | Any                                                 | +| Start                 | Tekton        | Resource Created                   | Annotations, Parameters                                   | Build Started, TestSuiteRun Started                 | +| Running               | Tekton        | Condition Changed, Unknown         | Annotations, Parameters                                   |                                                     | +| Running               | User          | Event requested by user            | Any data available in the step context                    | Any                                                 | +| End                   | Tekton        | Condition Changed, True or False   | Annotations, Parameters, Results                          | Build Finished, Artifact Packaged, Service Deployed | +| After End             | User          | Dedicated resource in the Pipeline | Any available in the context, results from previous Tasks | Any                                                 | #### User Interface diff --git a/teps/0138-decouple-api-and-feature-versioning.md b/teps/0138-decouple-api-and-feature-versioning.md index e3a93baa4..a58e127b4 100644 --- a/teps/0138-decouple-api-and-feature-versioning.md +++ b/teps/0138-decouple-api-and-feature-versioning.md @@ -25,7 +25,8 @@
authors: - [Per feature flag for new api-driven features](#per-feature-flag-for-new-api-driven-features) - [Sunset `enable-api-fields` after existing features stabilize](#sunset-`enable-api-fields`-after-existing-features-stabilize) - [Design Evaluation](#design-evaluation) - - [Pros and cons](#pros-and-cons) + - [Pros](#pros) + - [Cons](#cons) - [Alternatives](#alternatives) - [Per feature flag with new value for `enable-api-fields` `none`](#per-feature-flag-with-new-value-for-`enable-api-fields`-none) - [New `legacy-enable-beta-features-by-default` flag](#new-legacy-enable-beta-features-by-default-flag) @@ -165,11 +166,11 @@ See [implementation plan](#implementation-plan) for more details on the PerFeatu All **new** features can only be enabled via per-feature flags. When they first get introduced as `alpha`, they will be disabled by default. When new features get promoted to `stable`, they will be enabled by default according to the following table: -| Feature stability level | Default | -| ----------------------- | -------- | -| Stable | Enabled; Cannot be disabled | -| Beta | Disabled | -| Alpha | Disabled | +| Feature stability level | Default | +|-------------------------|-----------------------------| +| Stable | Enabled; Cannot be disabled | +| Beta | Disabled | +| Alpha | Disabled | Note that per-feature flags that have stabilized cannot be disabled. We will deprecate the per-feature flag after it has become stable and then remove it eventually after the deprecation period according to the API compatibility policy. We will give deprecation and later removel notice of the per-feature flags via release notes after their promotion to `stable`. Cluster operators who do not want such opt-in features would have enough notice to implement admission controllers on their own to disable the feature. For example, when a new future feature `pipeline-in-pipeline` becomes stable in v0.55, it would be enabled by default and cannot be disabled after the release. 
We would need to include in the release note of v0.55 that we are enabling the `pipeline-in-pipeline` feature by default and deprecating its feature flag. And after the deprecation period, we would remove the feature flag. @@ -272,21 +273,21 @@ This alternative proposes introducing a new flag `legacy-enable-beta-features-by This chart would apply to the existing `beta` features (array results, array indexing, object params and results, and remote resolution): -| enable-api-fields | legacy-enable-beta-features-by-default | enabled in `v1beta1`? | enabled in `v1`? | -| ------ | ------ | --- | --- | -| beta | true | yes | yes | -| beta | false | yes | yes | -| stable | true | yes | no | -| stable | false | no | no | +| enable-api-fields | legacy-enable-beta-features-by-default | enabled in `v1beta1`? | enabled in `v1`? | +|-------------------|-----------------------------------------|-----------------------|------------------| +| beta | true | yes | yes | +| beta | false | yes | yes | +| stable | true | yes | no | +| stable | false | no | no | For new `beta` features(e.g. matrix in the future): -| enable-api-fields | legacy-enable-beta-features-by-default | enabled in `v1beta1`? | enabled in `v1`? | -| ------ | ------ | --- | --- | -| beta | true | yes | yes | -| beta | false | yes | yes | -| stable | true | no | no | -| stable | false | no | no | +| enable-api-fields | legacy-enable-beta-features-by-default | enabled in `v1beta1`? | enabled in `v1`? | +|-------------------|-----------------------------------------|-----------------------|------------------| +| beta | true | yes | yes | +| beta | false | yes | yes | +| stable | true | no | no | +| stable | false | no | no | Once all existing `beta` features become `stable`, `legacy-enable-beta-features-by-default` can be removed. We will deprecate and then remove `legacy-enable-beta-features-by-default` and use `stable` `enable-api-fields`. We would default true for the new flag. 
After 9 months, we would default `enable-api-fields` to `stable` to preserve the existing behavior. @@ -341,11 +342,11 @@ type FeatureFlag struct { We plan to retain the existing testing matrix for fields at `alpha`, `beta` and `stable`. With the addition of per-feature flags for new features, it would look as follows: -| |enable-api-fields|Per feature flags|Integration tests| -| -- | -- | -- | -- | -|Opt-in stable| stable|Turn OFF all per feature flags|Run all stable e2e tests.| -|Opt-in beta| beta|Turn ON all beta per-feature flags|Run all beta e2e tests.| -|Opt-in alpha| alpha|Turn ON all per-feature flags (alpha and beta)|Run all alpha e2e tests.| +| | enable-api-fields | Per feature flags | Integration tests | +|---------------|-------------------|------------------------------------------------|---------------------------| +| Opt-in stable | stable | Turn OFF all per feature flags | Run all stable e2e tests. | +| Opt-in beta | beta | Turn ON all beta per-feature flags | Run all beta e2e tests. | +| Opt-in alpha | alpha | Turn ON all per-feature flags (alpha and beta) | Run all alpha e2e tests. | ## Additional CI tests @@ -358,11 +359,11 @@ We cannot test 2\*\*N combinations of per-feature flags, since that would be too -||enable-api-fields|Per feature flags|Integration tests| -| :- | :- | :- | :- | -|Opt-in stable|Set to stable|

All feature flags are OFF by default.

Turn ON one feature flag at a time.

|Run a small number of e2e tests against N combinations. It is not feasible to run the entire e2e test suite against N combinations.| -|Opt-in beta|Set to beta|Turn ON all beta per-feature flags by default.

Turn ON one per-feature (flag at an alpha stability level) at a time.|Run a small number of e2e tests against M combinations (where M <= N)| -|Opt-in alpha|Set to alpha|Turn ON all per-feature flags by default.

TURN OFF one feature flag at a time.|Run a small number of e2e tests against N combinations.| +| | enable-api-fields | Per feature flags | Integration tests | +|:--------------|:------------------|:----------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------| +| Opt-in stable | Set to stable |

All feature flags are OFF by default.

Turn ON one feature flag at a time.

| Run a small number of e2e tests against N combinations. It is not feasible to run the entire e2e test suite against N combinations. | +| Opt-in beta | Set to beta | Turn ON all beta per-feature flags by default.

Turn ON one per-feature flag (at an alpha stability level) at a time. | Run a small number of e2e tests against M combinations (where M <= N)                                                               | +| Opt-in alpha  | Set to alpha      | Turn ON all per-feature flags by default.

TURN OFF one feature flag at a time. | Run a small number of e2e tests against N combinations. | ### How many tests can we run in a reasonable amount of time? Based on a [recent PR](https://github.com/tektoncd/pipeline/pull/7032), our integration tests take between 26 mins (stable) → 33 mins (alpha). We don’t want to go beyond that. Based on [Feature flags testing matrix](https://docs.google.com/document/d/1r_MX9-mzRtdbfNQq5VC4guHb-tphA0WxWhlQIsusEEA/edit?resourcekey=0-RALry7-GaKn9i19UEaRnYg) benchmarking, the approximate time is: @@ -377,10 +378,10 @@ Nfeatures = 20 (currently we have 17 api features; lets assume that a Ntasks = 2 (two tasks per pipeline) -|Scenario|Npipelines|Nfeatures|Ntasks/pipeline|T (mins)| -| :- | :- | :- | :- | :- | -|How many pipelines can we afford to run in 30 mins?|**7**|20|2|30| -|How long would it take to run a single (same) pipeline for all the features?|1|20|2|**4**| +| Scenario | Npipelines | Nfeatures | Ntasks/pipeline | T (mins) | +|:-----------------------------------------------------------------------------|:----------------------|:---------------------|:---------------------------|:---------| +| How many pipelines can we afford to run in 30 mins? | **7** | 20 | 2 | 30 | +| How long would it take to run a single (same) pipeline for all the features? | 1 | 20 | 2 | **4** | ### How many tests should we run against the additional tests? diff --git a/teps/0140-producing-results-in-matrix.md b/teps/0140-producing-results-in-matrix.md index ec16b44b7..826b998e0 100644 --- a/teps/0140-producing-results-in-matrix.md +++ b/teps/0140-producing-results-in-matrix.md @@ -28,7 +28,7 @@ see-also: - [Requirements](#requirements) - [Use Cases](#use-cases) - [1. Build and Deploy](#1-build-and-deploy) - - [2. Checking compatibility of a browser extension on various platforms and browsers](#2-checking-compatibility-of-a-browser-extension-on-various--and-) + - [2. 
Checking compatibility of a browser extension on various platforms and browsers](#2-checking-compatibility-of-a-browser-extension-on-various-platforms-and-browsers) - [Proposal](#proposal) - [Design Details](#design-details) - [Results Cache](#results-cache) @@ -40,7 +40,7 @@ see-also: - [Missing Task Results](#missing-task-results) - [Examples](#examples) - [1. Build and Deploy Images](#1-build-and-deploy-images) - - [2. Checking compatibility of a browser extension on various platforms and browsers](#2-checking-compatibility-of-a-browser-extension-on-various--and--1) + - [2. Checking compatibility of a browser extension on various platforms and browsers](#2-checking-compatibility-of-a-browser-extension-on-various-platforms-and-browsers-1) - [Design Evaluation](#design-evaluation) - [Future Work](#future-work) - [Consuming Individual or Specific Combinations of Results Produced by a Matrixed PipelineTask](#consuming-individual-or-specific-combinations-of-results-produced-by-a-matrixed-pipelinetask) diff --git a/teps/0141-platform-context-variables.md b/teps/0141-platform-context-variables.md index ec7c444ed..5b0e183b6 100644 --- a/teps/0141-platform-context-variables.md +++ b/teps/0141-platform-context-variables.md @@ -25,9 +25,6 @@ collaborators: [] - [Proposal](#proposal) - [Notes and Caveats](#notes-and-caveats) - [Design Details](#design-details) - - [TaskRun and PipelineRun syntax](#taskrun-and-pipelinerun-syntax) - - [Parameter substitution](#parameter-substitution) - - [Provenance generation](#provenance-generation) - [Design Evaluation](#design-evaluation) - [Reusability](#reusability) - [Simplicity](#simplicity) diff --git a/working-groups.md b/working-groups.md index d3f0b733c..44b5178bb 100644 --- a/working-groups.md +++ b/working-groups.md @@ -23,7 +23,7 @@ calendar, and should invite tekton-dev@googlegroups.com to working group meetings. Recordings are made and posted in the minutes of each meeting. 
-These docs and the recordings themselves are visible to [all members of our mailing list](#mailing-list). +These docs and the recordings themselves are visible to [all members of our mailing list](./contact.md#mailing-list). The current working groups are: @@ -46,7 +46,6 @@ The current working groups are: - [Pipeline](#pipeline) - [Governing Board / Community](#governing-board--community) - [Software Supply Chain Security (s3c)](#software-supply-chain-security-s3c) -- [The goal of this working group is to discuss supply chain security initiatives across Tekton.](#the-goal-of-this-working-group-is-to-discuss-supply-chain-security-initiatives-across-tekton) - [Data Interface](#data-interface) - [Results](#results) @@ -307,24 +306,24 @@ and RedHat's [Pipelines as Code](https://github.com/openshift-pipelines/pipeline | Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | | | Slack Channels | [#workflows](https://tektoncd.slack.com/messages/workflows) | -|   | Points of Contact | Company | Profile | -|----------------------------------------------------------|-------------------|---------|--------------------------------------- | -| | Dibyo Mukherjee | Google | [dibyom](https://github.com/dibyom) | -| | Khurram Baig | Red Hat | [khrm](https://github.com/khrm) | -| | Chmouel Boudjnah | Red Hat | [chmouel](https://github.com/chmouel) | -| | Lee Bernick | Google | [lbernick](https://github.com/lbernick)| +|   | Points of Contact | Company | Profile | +|----------------------------------------------------------|-------------------|---------|-----------------------------------------| +| | Dibyo Mukherjee | Google | [dibyom](https://github.com/dibyom) | +| | Khurram Baig | Red Hat | [khrm](https://github.com/khrm) | +| | Chmouel Boudjnah | Red Hat | [chmouel](https://github.com/chmouel) | +| | Lee Bernick | Google | [lbernick](https://github.com/lbernick) | ## Pipeline This is the working group for 
[`tektoncd/pipeline`](https://github.com/tektoncd/pipeline). Connecting to the Meeting VC requires a Zoom account. -| Artifact | Link | -|----------|-------------------------------------------------------------------| -| Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | -| Meeting Notes | [Notes](https://docs.google.com/document/d/1AfJfdyd1JN2P4haBdYOxEn6SMENMgvQF9ps7iF2QSG0/edit) | -| Slack Channel | [#pipeline-dev](https://tektoncd.slack.com/messages/pipeline-dev) | -| Community Meeting VC | [https://zoom.us/j/98272582734?pwd=OTBVMWJIbVJZcUU3WnlodTEvVS9PUT09](https://zoom.us/j/98272582734?pwd=OTBVMWJIbVJZcUU3WnlodTEvVS9PUT09) | +| Artifact | Link | +|----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | +| Meeting Notes | [Notes](https://docs.google.com/document/d/1AfJfdyd1JN2P4haBdYOxEn6SMENMgvQF9ps7iF2QSG0/edit) | +| Slack Channel | [#pipeline-dev](https://tektoncd.slack.com/messages/pipeline-dev) | +| Community Meeting VC | [https://zoom.us/j/98272582734?pwd=OTBVMWJIbVJZcUU3WnlodTEvVS9PUT09](https://zoom.us/j/98272582734?pwd=OTBVMWJIbVJZcUU3WnlodTEvVS9PUT09) | | Community Meeting Calendar | Tuesday every other week, 09:30a-10:00a PST
[Calendar](https://calendar.google.com/event?action=TEMPLATE&tmeid=NG5jdWV0ZTFxaGo0MHNpYzVnODlrYXZucGhfMjAyMTExMDJUMTYzMDAwWiBnb29nbGUuY29tX2Qzb3Zjdm8xcDMyMTloOTg5NTczdjk4Zm5zQGc&tmsrc=google.com_d3ovcvo1p3219h989573v98fns%40group.calendar.google.com&scp=ALL) | |   | Facilitators | Company | Profile | @@ -342,12 +341,12 @@ Connecting to the Meeting VC requires a Zoom account. This is the sync meeting for [the Tekton governing board](https://github.com/tektoncd/community/blob/main/governance.md#tekton-governance-committee) and a place to discuss community operations and process. -| Artifact | Link | -|----------|-------------------------------------------------------------------| -| Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | -| Meeting Notes | [Notes](https://docs.google.com/document/d/1g2HQk_4ypYEWPX2WjOFa6vOMpwYoRNprph_P09TS1UM/edit) | -| Slack Channel | [#governance](https://tektoncd.slack.com/messages/governance) | -| Community Meeting VC | [Zoom](https://zoom.us/j/96566024785?pwd=WjRKQkNzK1ZDQm9URitaV0w5eVBldz09) | +| Artifact | Link | +|----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | +| Meeting Notes | [Notes](https://docs.google.com/document/d/1g2HQk_4ypYEWPX2WjOFa6vOMpwYoRNprph_P09TS1UM/edit) | +| Slack Channel | [#governance](https://tektoncd.slack.com/messages/governance) | +| Community Meeting VC | [Zoom](https://zoom.us/j/96566024785?pwd=WjRKQkNzK1ZDQm9URitaV0w5eVBldz09) | | Community Meeting Calendar | Tuesday every other week, 09:00a-09:30a PST
[Calendar](https://calendar.google.com/event?action=TEMPLATE&tmeid=NjFvcHNib2E2cjNwcGc2dGhnMmY2OGU4YTFfMjAyMjAxMThUMTcwMDAwWiBnb29nbGUuY29tX2Qzb3Zjdm8xcDMyMTloOTg5NTczdjk4Zm5zQGc&tmsrc=google.com_d3ovcvo1p3219h989573v98fns%40group.calendar.google.com&scp=ALL) | |   | Facilitators | Company | Profile | @@ -365,12 +364,12 @@ and a place to discuss community operations and process. The goal of this working group is to discuss supply chain security initiatives across Tekton (exact scope TBD [community#629](https://github.com/tektoncd/community/issues/629)). -| Artifact | Link | -|----------|-------------------------------------------------------------------| -| Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | -| Meeting Notes | [HackMD Notes](https://hackmd.io/gFcAZMFMRwuTaZ1i7Y3fSg) | -| Slack Channel | [#s3c-working-group](https://tektoncd.slack.com/messages/s3c-working-group) | -| Community Meeting VC | [https://zoom.us/j/96593435267?pwd=TTNVYUJEQlNzMXlKYjFXcUwzOUZEdz09](https://zoom.us/j/96593435267?pwd=TTNVYUJEQlNzMXlKYjFXcUwzOUZEdz09) | +| Artifact | Link | +|----------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | +| Meeting Notes | [HackMD Notes](https://hackmd.io/gFcAZMFMRwuTaZ1i7Y3fSg) | +| Slack Channel | [#s3c-working-group](https://tektoncd.slack.com/messages/s3c-working-group) | +| Community Meeting VC | [https://zoom.us/j/96593435267?pwd=TTNVYUJEQlNzMXlKYjFXcUwzOUZEdz09](https://zoom.us/j/96593435267?pwd=TTNVYUJEQlNzMXlKYjFXcUwzOUZEdz09) | | Community Meeting Calendar | Tuesday every other week, 09:00a-09:30a PST
[Calendar](https://calendar.google.com/event?action=TEMPLATE&tmeid=NDFuMjg2OTloYTJrYm1jNGM1dWZiZ3JzdGZfMjAyMjAyMjJUMTcwMDAwWiBjaHJpc3RpZXdpbHNvbkBnb29nbGUuY29t&tmsrc=christiewilson%40google.com&scp=ALL) | * [**Facilitator list in agenda notes**](https://hackmd.io/gFcAZMFMRwuTaZ1i7Y3fSg?view#Facilitator-instructions) @@ -384,7 +383,7 @@ in Tekton. The problem space is scoped in this [document](https://docs.google.co |----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | | Meeting Notes | [Notes](https://docs.google.com/document/d/1z2ME1o_XHvqv6cVEeElljvqVqHV8-XwtXdTkNklFU_8/edit) | -| Slack Channel | [#tekton-data-interface](https://tektoncd.slack.com/messages/tekton-data-interface) | +| Slack Channel | [#tekton-data-interface](https://tektoncd.slack.com/messages/tekton-data-interface) | | Community Meeting VC | [https://zoom.us/j/94243917326?pwd=MThrUVVDSnlEU2FNWG10Yk1CcnRlZz09](https://zoom.us/j/94243917326?pwd=MThrUVVDSnlEU2FNWG10Yk1CcnRlZz09) | | Community Meeting Calendar | Wednesday every week, 04:00p-04:30p UTC
[Calendar](https://calendar.google.com/calendar?cid=Z29vZ2xlLmNvbV9kM292Y3ZvMXAzMjE5aDk4OTU3M3Y5OGZuc0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t) | @@ -392,16 +391,16 @@ in Tekton. The problem space is scoped in this [document](https://docs.google.co This is the working group for [tektoncd/results](https://github.com/tektoncd/results). -| Artifact | Link | -|---------------|---------------------------------------------------------------------------------| -| Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | -| Meeting Notes | [Notes](https://docs.google.com/document/d/1GXKq-tc4oJUHdjIe-usHclugVU06sljo4i8BT2OaHbY/edit?usp=sharing) | -| Slack Channel | [#results](https://tektoncd.slack.com/messages/results) | -| Community Meeting VC | [Zoom link](https://zoom.us/j/99187487778?pwd=UGZhOHd3cWJlVFhMTDNTVGxQeG1ndz09#success) | +| Artifact | Link | +|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Forum | [tekton-dev@](https://groups.google.com/forum/#!forum/tekton-dev) | +| Meeting Notes | [Notes](https://docs.google.com/document/d/1GXKq-tc4oJUHdjIe-usHclugVU06sljo4i8BT2OaHbY/edit?usp=sharing) | +| Slack Channel | [#results](https://tektoncd.slack.com/messages/results) | +| Community Meeting VC | [Zoom link](https://zoom.us/j/99187487778?pwd=UGZhOHd3cWJlVFhMTDNTVGxQeG1ndz09#success) | | Community Meeting Calendar | Thursday every week, 5:30-6:00a PST
[Calendar](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=Nmk2cG5jZ2QyYjJuaWFvdDBjYmxhamlhNzRfMjAyMzAyMDJUMTMzMDAwWiBhZGthcGxhbkByZWRoYXQuY29t&tmsrc=adkaplan%40redhat.com&scp=ALL) | |   | Facilitators | Company | Profile | -|-------------------------------------------------------------- |-----------------|---------|---------------------------------------------------| +|---------------------------------------------------------------|-----------------|---------|---------------------------------------------------| | | Emil Natan | Red Hat | [enarha](https://github.com/enarha) | | | Avinal Kumar | Red Hat | [avinal](https://github.com/avinal) | | | Sayan Biswas | Red Hat | [sayan-biswas](https://github.com/sayan-biswas) |