Import content updates from google doc. Refactor guidance organization
Fixes #10
acramsay committed Jan 15, 2024
1 parent c987d41 commit a5dd4df
Showing 10 changed files with 189 additions and 242 deletions.
235 changes: 0 additions & 235 deletions docs/delivery-metrics.md

This file was deleted.

21 changes: 21 additions & 0 deletions docs/guidance/metrics/devex-platform.md
@@ -0,0 +1,21 @@
# DevEx & Platform Metrics

## Platform Metrics

These metrics provide feedback for shared services teams, who exist to serve product teams. This feedback is critical for confirming that platform teams are building the right things and providing an enjoyable experience to their customers. The high-level descriptions in this section are deliberately generic, given the broad context of “platform”.

- **Adoption Rate** - Assess the adoption rate of your reusable components
- **InnerSourcing Contributions** - Assess the number of code contributions to the platform components from contributors outside of the platform team
- **Platform Experience Survey** - Deploy platform specific questions while users are in the context of using your platform. For example, if your platform is CI/CD, issue a brief survey post git push, while the user is in context, waiting on a build.
- **Platform Hotspots** - Gather component level data from support request intake. Use this data to inform where uplift is needed both in documentation and capability. This will reduce your overall support in the given area and allow the team to stay in context.
- **Platform Resiliency** - Leverage the key tenets of observability to define and meet SLIs and SLOs specific to your platform. Simply put, this is platform uptime.

## Developer Experience (DevEx) Metrics

These metrics capture the overall experience of engineers at your organization: what does it feel like to work here? They are critical for attracting and retaining talent.

- **Onboarding Experience** - Measure the overall onboarding time from gaining hardware to receiving the proper access to be productive. How does your organization's onboarding experience compare to other organizations?
- **Mean Time to First Commit in Production** - Calculate the delta between a new hire's start date and the timestamp of their first commit deployed to production (see the sketch after this list)
- **Time In Context/Flow** - Measure the amount of time spent doing development work vs. time spent in meetings and ceremonies
- **Discoverability of Quality Documentation** - How easy is it to find the materials you need to do your job?
- **Use of Well-Known, “Googleable” Technologies** - Favor well-known or open source technologies over home-grown, bespoke solutions with high learning curves
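
A minimal sketch of the “mean time to first commit in production” calculation referenced above, assuming hypothetical per-hire records of start date and first deployed commit:

```python
from datetime import date

# Hypothetical new-hire records: (start_date, first_commit_deployed_to_production)
new_hires = [
    (date(2024, 1, 8), date(2024, 1, 19)),
    (date(2024, 2, 5), date(2024, 2, 27)),
]

# Delta between start date and first production commit, averaged across hires
days_to_first_commit = [(first_commit - start).days for start, first_commit in new_hires]
mean_days = sum(days_to_first_commit) / len(days_to_first_commit)
print(f"Mean time to first production commit: {mean_days:.1f} days")
```
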
31 changes: 31 additions & 0 deletions docs/guidance/metrics/lagging-delivery-indicators.md
@@ -0,0 +1,31 @@
# DORA Metrics

These four key delivery metrics from [DORA](https://dora.dev/) are the industry standard for measuring software delivery. We have found them to be effective, but lagging, indicators of how teams are delivering software.

## Lead Time for Changes

Measures the time between merging a code change into trunk and deploying the code change to production. Provides insights into workflow efficiency and bottlenecks. Shorter lead times indicate smoother processes and quicker value delivery.

- _How to Measure:_ Conduct a team-level Value Stream Mapping (VSM) exercise to capture the time a code change takes to go from commit to production
- _Example:_ A team's lead time is 509.15h (~64 days); of that, active working time is 163.85h (~20 days)
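
A minimal sketch of the calculation, assuming you can pair each change's merge-to-trunk timestamp with its production deploy timestamp (the sample data below is hypothetical):

```python
from datetime import datetime

# Hypothetical change records: (merged_to_trunk, deployed_to_production)
changes = [
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 23, 14, 30)),
    (datetime(2024, 1, 5, 11, 0), datetime(2024, 1, 29, 10, 15)),
]

# Lead time per change in hours, then averaged across the sample
lead_times_h = [(deployed - merged).total_seconds() / 3600 for merged, deployed in changes]
avg_lead_time_h = sum(lead_times_h) / len(lead_times_h)
print(f"Average lead time for changes: {avg_lead_time_h:.2f}h")
```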

## Deploy Frequency

Measures how often code is deployed to Production. Enables rapid iteration and faster time-to-market. Encourages small, incremental changes, reducing the risk of failures.

- _How to Measure:_ Divide the total number of deployments made in a given time period (e.g., a month) by the total number of days in that period
- _Example:_ If a team deployed code 10 times in a month with 31 days, the deployment frequency would be 10/31 = an average of _0.32 deployments per day_ over the month
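
A minimal sketch of the arithmetic from the example above, assuming you only track a deployment count for the period:

```python
# Hypothetical counts for a 31-day month
deployments_in_period = 10
days_in_period = 31

deploy_frequency = deployments_in_period / days_in_period
print(f"Deployment frequency: {deploy_frequency:.2f} deployments per day")  # ~0.32
```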

## Change Failure Rate

Measures the percentage of deployments that result in failures once the change is in production or released to end users. Offers visibility into code quality and stability. Low failure rates signify robust testing and higher software reliability.

- _How to Measure:_ Percentage of code changes that resulted in an incident, rollback, or any other type of production failure. Calculated by counting the number of deployment failures and then dividing by the number of total deployments in a given time period.
- _Example:_ If your team deployed five times this week and one of them resulted in a failure, your change failure rate is 20%
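
A minimal sketch, assuming you can flag which deployments led to an incident, rollback, or other production failure:

```python
# Hypothetical counts for one week
total_deployments = 5
failed_deployments = 1  # deployments that caused an incident, rollback, or other production failure

change_failure_rate = failed_deployments / total_deployments * 100
print(f"Change failure rate: {change_failure_rate:.0f}%")  # 20%
```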

## Mean Time to Restore (MTTR)

Calculates the time needed to recover from a service disruption and highlights the team's ability to detect and resolve issues swiftly. Shorter MTTR reflects strong incident response and system resilience.

- _How to Measure:_ Measures the time it takes for a service to recover from a failure. Calculated by tracking the average time between the start of a service disruption and the moment a fix is deployed.
- _Example:_ A team's average time from problem detection to full recovery is 90 minutes over the course of 6 months.
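
A minimal sketch, assuming each incident records when the disruption was detected and when the fix was deployed (the timestamps below are hypothetical):

```python
from datetime import datetime

# Hypothetical incident records: (disruption_detected, fix_deployed)
incidents = [
    (datetime(2024, 1, 3, 10, 0), datetime(2024, 1, 3, 11, 45)),
    (datetime(2024, 2, 10, 22, 0), datetime(2024, 2, 10, 23, 15)),
]

# Recovery time per incident in minutes, averaged across the period
recovery_minutes = [(fixed - detected).total_seconds() / 60 for detected, fixed in incidents]
mttr = sum(recovery_minutes) / len(recovery_minutes)
print(f"MTTR: {mttr:.0f} minutes")
```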
