Merge branch 'main' into renovate/mkdocs-material-9.x-lockfile
Showing 9 changed files with 357 additions and 67 deletions.
# Business OKRs defined & measurable

[Objectives & Key Results (OKRs)][okr] are defined, with clear and inspiring
Objectives that align with the company's overall mission and vision. Key Results
are specific, measurable, and quantifiable, providing a clear path towards
achieving the Objectives. OKRs are regularly reviewed and updated as needed,
with a strong commitment to achieving them.

***How to Measure:*** All team members understand the OKRs and how their work
contributes to their achievement. The OKRs are logged in the company's OKR
tracker.

[okr]: https://www.productboard.com/blog/defining-objectives-and-key-results-for-your-product-team/
# Engineering Defaults

## Pair Programming

## Trunk Based Development

## Small Batch Delivery
docs/human-systems/delivery-metrics/lagging-delivery-indicators.md (55 changes: 42 additions & 13 deletions)
# DORA Metrics

The four key delivery metrics from [DORA](https://dora.dev/) are the industry
standard for measuring software delivery. We have found that these metrics are
essential in modern software delivery. However, these metrics are not absolute
and are lagging indicators of how teams are delivering software.

## Lead Time for Changes

Measures the time between merging a code change into trunk and deploying the
code change to production. Provides insights into workflow efficiency and
bottlenecks. Shorter lead times indicate smoother processes and quicker value
delivery.

***How to Measure:*** Conduct a team-level Value Stream Map (VSM) to gather the
time a code change takes from commit to production.

***Example:*** A team's lead time is 509.15h (~64 days). Working time is 163.85h
(~20 days).
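The commit-to-production interval above can be sketched in a few lines of
Python; the timestamps are hypothetical, chosen only to illustrate the
arithmetic:

```python
from datetime import datetime


def lead_time_hours(merged: datetime, deployed: datetime) -> float:
    """Hours between a change merging into trunk and reaching production."""
    return (deployed - merged).total_seconds() / 3600


# Hypothetical single change: merged Jan 1 09:00, deployed Jan 3 15:30
merged = datetime(2024, 1, 1, 9, 0)
deployed = datetime(2024, 1, 3, 15, 30)
print(lead_time_hours(merged, deployed))  # 54.5
```

In practice the VSM supplies these two timestamps per change, and the team-level
lead time is the average across changes.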
## Deploy Frequency

Measures how often code is deployed to Production. Enables rapid iteration and
faster time-to-market. Encourages small, incremental changes, reducing the risk
of failures.

***How to Measure:*** Divide the total number of deployments made in a given
time period (e.g., a month) by the total number of days in that period.

***Example:*** If a team deployed code 10 times in a month with 31 days, the
deployment frequency would be 10/31 = an average of *0.32 deployments per day*
over the month.
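The division above can be sketched directly, reusing the document's own
10-deployments-in-31-days example:

```python
def deploy_frequency(deployments: int, days_in_period: int) -> float:
    """Average number of deployments per day over a period."""
    return deployments / days_in_period


# 10 deployments in a 31-day month
print(round(deploy_frequency(10, 31), 2))  # 0.32
```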
## Change Failure Rate

Measures the percentage of deployments that result in failures after the change
is in production or released to the end user. Offers visibility into code
quality and stability. Low failure rates signify robust testing and higher
software reliability.

***How to Measure:*** Percentage of code changes that resulted in an incident,
rollback, or any other type of production failure. Calculated by counting the
number of deployment failures and then dividing by the total number of
deployments in a given time period.

***Example:*** If your team deployed five times this week and one of them
resulted in a failure, your change failure rate is 20%.
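The same calculation can be sketched with the five-deployments, one-failure
example above:

```python
def change_failure_rate(failed_deployments: int, total_deployments: int) -> float:
    """Percentage of deployments that caused a production failure."""
    return 100 * failed_deployments / total_deployments


# 1 failure out of 5 deployments this week
print(change_failure_rate(1, 5))  # 20.0
```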
## Mean Time to Restore (MTTR)

Calculates the time needed to recover from a service disruption and highlights
the team's ability to detect and resolve issues swiftly. Shorter MTTR reflects
strong incident response and system resilience.

***How to Measure:*** Measures the time it takes for a service to recover from
failure. Calculated by tracking the average time between a service disruption
and the moment a fix is deployed.

***Example:*** A team's average time from problem detection to full recovery is
90 minutes over the course of 6 months.
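The averaging step can be sketched as follows; the incident timestamps are
hypothetical, chosen so the mean matches the 90-minute figure above:

```python
from datetime import datetime, timedelta


def mean_time_to_restore(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Average gap between service disruption and the fix being deployed."""
    total = sum((fixed - detected for detected, fixed in incidents), timedelta())
    return total / len(incidents)


# Hypothetical incidents: (disruption detected, fix deployed)
incidents = [
    (datetime(2024, 3, 1, 10, 0), datetime(2024, 3, 1, 11, 0)),  # 60 min
    (datetime(2024, 3, 9, 14, 0), datetime(2024, 3, 9, 16, 0)),  # 120 min
]
print(mean_time_to_restore(incidents))  # 1:30:00
```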