The overall process for contributing to Infracost is:
- Check the project board to see if there is something you'd like to work on; these are the issues we'd like to focus on in the near future. There are also other issues that you might like to check; the issue labels should help you find a `good first issue`, or new resources that others have already requested/liked.
- Create a new issue if there's no issue for what you want to work on. Please include as much detail as you think is necessary; the use-case context is especially helpful if you'd like to receive good feedback.
- Add a comment to the issue you're working on to let the rest of the community know.
- Create a fork, then commit and push to your fork. Send a pull request (PR) from your fork to this repo with the proposed change. Don't forget to run `make lint` and `make fmt` first. Please include unit and integration tests where applicable. We use Conventional Commits, so commit messages can usually start with "feat(aws): add ...", "feat(google): add ...", or "fix: nil pointer...". This helps us generate a cleaner changelog.
- If it's your first PR to the Infracost org, a bot will leave a comment asking you to follow a quick step to sign our Contributor License Agreement.
- We'll review your change and provide feedback.
Install Go dependencies:

```sh
make deps
```

Add your Infracost API key to your `.env.local` file:

```sh
cat <<EOF >> .env.local
INFRACOST_API_KEY=XXX
EOF
```

Run the code:

```sh
make run ARGS="--path examples/terraform --usage-file=examples/terraform/infracost-usage.yml"
```
Running all tests takes ~12 mins on GitHub Actions, so it's faster to only run your tests as you dev and leave CI to run them all:

```sh
make test
```

Exclude integration tests:

```sh
make test ARGS="-v -short"
```

Build:

```sh
make build
```
Make sure to get familiar with the pricing model of the resource first by reading the cloud vendor's pricing page. To begin, add a new file in `internal/providers/terraform/aws/` as well as an accompanying test file.
```go
package aws

import (
	"fmt"

	"github.com/infracost/infracost/internal/schema"

	"github.com/shopspring/decimal"
)

func GetMyResourceRegistryItem() *schema.RegistryItem {
	return &schema.RegistryItem{
		Name:  "aws_my_resource",
		RFunc: NewMyResource,
	}
}

func NewMyResource(d *schema.ResourceData, u *schema.UsageData) *schema.Resource {
	region := d.Get("region").String()

	var instanceCount int64 = 1

	costComponents := []*schema.CostComponent{
		{
			Name:           fmt.Sprintf("Instance (on-demand, %s)", "my_instance_type"),
			Unit:           "hours",
			UnitMultiplier: 1,
			HourlyQuantity: decimalPtr(decimal.NewFromInt(instanceCount)),
			ProductFilter: &schema.ProductFilter{
				VendorName:    strPtr("aws"),
				Region:        strPtr(region),
				Service:       strPtr("AmazonES"),
				ProductFamily: strPtr("My AWS Resource family"),
				AttributeFilters: []*schema.AttributeFilter{
					{Key: "usagetype", ValueRegex: strPtr("/Some Usage type/")},
					{Key: "instanceType", Value: strPtr("Some instance type")},
				},
			},
			PriceFilter: &schema.PriceFilter{
				PurchaseOption: strPtr("on_demand"),
			},
		},
	}

	return &schema.Resource{
		Name:           d.Address,
		CostComponents: costComponents,
	}
}
```
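The `strPtr` and `decimalPtr` helpers used above already exist in the `aws` provider package, so you don't need to add them. For orientation, they are simple pointer helpers roughly like this (a sketch, not a copy of the actual file):

```go
// Sketch of the pointer helpers used when building filters and quantities.
func strPtr(s string) *string {
	return &s
}

func decimalPtr(d decimal.Decimal) *decimal.Decimal {
	return &d
}
```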
Next, append the resource to the registry in `internal/providers/terraform/aws/registry.go`.
```go
package aws

import "github.com/infracost/infracost/internal/schema"

var ResourceRegistry []*schema.RegistryItem = []*schema.RegistryItem{
	...,
	GetMyResourceRegistryItem(),
}
```
Finally, create a temporary Terraform file to test your resource and run the following (no need to commit that file):

```sh
make run ARGS="--path my_new_terraform/"
```
When adding your first resource, we recommend you look at one of the existing resources to see how it's done, for example, check the `nat_gateway.go` resource. You can then review the `price_explorer` scripts, which help you find the various pricing service filters, and something called a "priceHash" that you need for writing integration tests.
We distinguish the price of a resource from its cost. Price is the per-unit price advertised by a cloud vendor. The cost of a resource is calculated by multiplying its price by its usage. For example, an EC2 instance might be priced at $0.02 per hour, and if run for 100 hours (its usage), it'll cost $2.00. When adding resources to Infracost, we can always show their price, but if the resource has a usage-based cost component, we can't show its cost. To solve this problem, new resources in Infracost go through two levels of support:
You can add all price components for the resource, even ones that are usage-based, so the price column in the table output is always populated. The hourly and monthly cost for these components will show `-`, as illustrated in the following output for AWS Lambda. Once this is done, please send a pull request to this repo so someone can review/merge it. Try to re-use relevant cost components from other resources where applicable, e.g. notice how the `newElasticacheResource` function is used in `aws_elasticache_cluster` and `aws_elasticache_replication_group`. Please use this pull request description as a guide on the level of detail to include in your PR, including required integration tests.
```
 NAME                              MONTHLY QTY  UNIT         PRICE   HOURLY COST  MONTHLY COST

 aws_lambda_function.hello_world
 ├─ Requests                                 -  1M requests  0.2000            -             -
 └─ Duration                                 -  GB-seconds    2e-05            -             -
 Total                                                                         -             -
```
Infracost supports passing usage data in through a usage YAML file. When adding a new resource we should add an example of how to specify the usage data in `infracost-usage-example.yml`. This should include an example resource and usage data, along with comments detailing what the usage values are. Here's an example of the entry for AWS Lambda:

```yaml
aws_lambda_function.my_function:
  monthly_requests: 100000     # Monthly requests to the Lambda function.
  request_duration_ms: 500     # Average duration of each request in milliseconds.
```
When running infracost with `--usage-file path/to/infracost-usage.yml`, the output shows the hourly/monthly cost columns populated with non-zero values:
```
 NAME                              MONTHLY QTY  UNIT         PRICE   HOURLY COST  MONTHLY COST

 aws_lambda_function.hello_world
 ├─ Requests                               100  1M requests  0.2000       0.0274       20.0000
 └─ Duration                         3,750,000  GB-seconds    2e-05       0.0856       62.5001
 Total                                                                    0.1130       82.5001
```
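Inside the resource function, these values arrive through the `u *schema.UsageData` parameter. Below is a minimal sketch of how a resource can read an optional usage value and leave the quantity `nil` when it isn't provided, so the cost columns fall back to `-`. The function name is hypothetical, and the exact accessors should be checked against an existing resource such as the Lambda one; it assumes the same imports as the earlier example plus `github.com/tidwall/gjson`:

```go
// Sketch (hypothetical helper): build the "Requests" cost component, setting
// MonthlyQuantity only when the user supplied monthly_requests in the usage file.
func newRequestsCostComponent(u *schema.UsageData) *schema.CostComponent {
	var monthlyRequests *decimal.Decimal
	if u != nil && u.Get("monthly_requests").Type != gjson.Null {
		monthlyRequests = decimalPtr(decimal.NewFromInt(u.Get("monthly_requests").Int()))
	}

	return &schema.CostComponent{
		Name:            "Requests",
		Unit:            "1M requests",
		UnitMultiplier:  1000000,
		MonthlyQuantity: monthlyRequests,
		// ProductFilter and PriceFilter omitted for brevity.
	}
}
```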
Our aim is to make Infracost's output understandable without needing to read separate docs. We try to match the cloud vendor pricing webpages as users have probably seen those before. It's unlikely that users will have looked at the pricing service JSON (which comes from cloud vendors' pricing APIs), or looked at the detailed billing CSVs that can show the pricing service names. Please check this spreadsheet for examples of cost component names and units.
Where a cloud vendor's pricing page information can be improved for clarity, we'll do that, e.g. on some pricing webpages AWS uses "Storage Rate" to describe pricing for "Provisioned IOPS storage", so we use the latter.
The cost component name should not change when the IaC resource params change; anything that can change should be put in brackets. For example:
- `General Purpose SSD storage (gp2)` should be `Storage (gp2)` as the storage type can change.
- `Outbound data transfer to EqDC2` should be `Outbound data transfer (to EqDC2)` as the EqDC2 value changes based on the location.
- `Linux/UNIX (on-demand, m1.small)` should be `Instance usage (Linux/UNIX, on-demand, m1.small)`.
In the future, we plan to add a separate field to cost components to hold the metadata in brackets.
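In code this usually means keeping the stable wording as a literal and formatting only the changeable parameters into the brackets, mirroring the `Name` line of the earlier cost component example (a sketch; `purchaseOption` and `instanceType` are hypothetical variables read from the Terraform resource data):

```go
// Keep the stable wording fixed; only changeable parameters go inside the brackets.
// purchaseOption and instanceType are hypothetical values read from the Terraform
// resource data in a real resource function.
Name: fmt.Sprintf("Instance usage (Linux/UNIX, %s, %s)", purchaseOption, instanceType),
```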
The following notes are general guidelines; please leave a comment in your pull request if they don't make sense or can be improved for the resource you're adding.
- references to other resources: if you need access to other resources referenced by the resource you're adding, you can specify `ReferenceAttributes`. For example, the aws_ebs_snapshot resource uses this because its price depends on the size of the referenced volume (see the sketch after this list).
- count: do not include the count in the cost component name or in brackets. Terraform's `count` replicates a resource in the `plan.json` file. If something like `desired_count` or another cost-related count parameter is included in the `plan.json` file, do use it when calculating the HourlyQuantity/MonthlyQuantity so each line-item in the Infracost output shows the total price/cost for that line-item.
- units:
  - use plural, e.g. hours, months, requests, GB-months, GB (already plural). For a "unit per something", use singular per time unit, e.g. use Per GB per hour. Where it makes sense, instead of "API calls" use "API requests" or "requests" for better consistency.
  - for things where the Terraform resource represents 1 unit, e.g. an `aws_instance`, an `aws_secretsmanager_secret` and a `google_dns_managed_zone`, the units should be months (or hours if that makes more sense). For everything else, the units should be whatever is being charged for, e.g. queries, requests.
  - for data transferred, where you pay for the data per GB, use `GB`. For storage, where you pay per GB per month, use `GB-months`. You'll probably see that the Cloud Pricing API's units use a similar logic. The AWS pricing pages sometimes use a different unit than their own pricing API; in that case the pricing API is a better guide.
- unit multiplier: when adding a `costComponent`, set the `UnitMultiplier` to 1 unless the price is for a large number, e.g. set it to `1000000` if the price should be shown "per 1M requests" in the output.
- tiers in names: use the K postfix for thousand, M for million, B for billion and T for trillion, e.g. "Requests (first 300M)" and "Messages (first 1B)". Use the words "first", "next" and "over" when describing tiers. Units should not be included in brackets unless the cost component relates to storage or data transfer, e.g. "Storage (first 1TB) GB" is more understandable than "Storage (first 1K) GB" since users understand terabytes and petabytes. You should be able to use the `CalculateTierBuckets` method for calculating tier buckets.
- purchase options: if applicable, include "on-demand" in brackets after the cost component name, e.g. `Database instance (on-demand)`.
- instance type: if applicable, include it in brackets as the 2nd argument, after the cost component name, e.g. `Database instance (on-demand, db.t3.medium)`.
- storage type: if applicable, include the storage type in brackets in lower case, e.g. `General purpose storage (gp2)`.
- upper/lower case: cost component names should start with a capital letter and use capital letters for acronyms, unless the acronym refers to a type used by the cloud vendor, for example, `General purpose storage (gp2)` (as `gp2` is a type used by AWS) and `Provisioned IOPS storage`.
- unnecessary words: drop the following words from cost component names if the cloud vendor's pricing webpage shows them: "Rate", "Volumes", "SSD", "HDD".
- brackets: only use 1 set of brackets after a component name, e.g. `Database instance (on-demand, db.t3.medium)` and not `Database instance (on-demand) (db.t3.medium)`.
- free resources: if there are certain conditions that can be checked inside a resource Go file which mean there are no cost components for the resource, return a `NoPrice: true` and `IsSkipped: true` response as shown below.

  ```go
  // Gateway endpoints don't have a cost associated with them
  if vpcEndpointType == "Gateway" {
      return &schema.Resource{
          NoPrice:   true,
          IsSkipped: true,
      }
  }
  ```
- unsupported resources: if there are certain conditions that can be checked inside a resource Go file which mean that the resource is not yet supported, log a warning to explain what is not supported and return a `nil` response as shown below.

  ```go
  if d.Get("placement_tenancy").String() == "host" {
      log.Warnf("Skipping resource %s. Infracost currently does not support host tenancy for AWS Launch Configurations", d.Address)
      return nil
  }
  ```
- to conditionally set values based on the Terraform resource values, first check `d.Get("value_name").Type != gjson.Null`, as in the google_container_cluster resource. In the past we used `.Exists()`, but that only checks whether the key is present in the Terraform JSON, not whether it is present but set to null.
- use `IgnoreIfMissingPrice: true` if you need to look up a price in the Cloud Pricing API and NOT add it if there is no price. We use it for EBS Optimized instances since we don't know if they should have that cost component without looking it up.
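As mentioned in the references item above, here is a rough sketch of how `ReferenceAttributes` can be used. The names are hypothetical, and the lookup calls (e.g. `d.References`) should be verified against an existing resource such as aws_ebs_snapshot before copying; it assumes the same imports as the earlier examples plus `github.com/tidwall/gjson`:

```go
// Sketch: declare which attribute references another resource in the registry
// item, then read the referenced resource's data in the resource function.
func GetMySnapshotRegistryItem() *schema.RegistryItem {
	return &schema.RegistryItem{
		Name:                "aws_my_snapshot", // hypothetical resource name
		RFunc:               NewMySnapshot,
		ReferenceAttributes: []string{"volume_id"},
	}
}

func NewMySnapshot(d *schema.ResourceData, u *schema.UsageData) *schema.Resource {
	gbVal := decimal.NewFromInt(8) // fallback size when the referenced volume isn't found

	volumeRefs := d.References("volume_id")
	if len(volumeRefs) > 0 && volumeRefs[0].Get("size").Type != gjson.Null {
		gbVal = decimal.NewFromInt(volumeRefs[0].Get("size").Int())
	}

	// Build cost components using gbVal here, as in the earlier example.
	_ = gbVal
	return &schema.Resource{Name: d.Address}
}
```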
- Where possible, use similar terminology to the cloud vendor's pricing pages; their cost calculators might also help.
- Do not prefix things with `average_` as in the future we might want to use nested values, e.g. `request_duration_ms.max`.
- Use the following units and keep them lower-case:
  - time: ms, secs, mins, hrs, days, weeks, months
  - size: b, kb, mb, gb, tb
- Put the units last, e.g. `message_size_kb`, `request_duration_ms`.
- For resources that are continuous in time, do not use prefixes, e.g. use `instances`, `subscriptions`, `storage_gb`. For non-continuous resources, prefix with `monthly_` so users know what time interval to estimate for, e.g. `monthly_log_lines`, `monthly_requests`.
- When the field accepts a string (e.g. `dx_connection_type: dedicated`), the values should be used in a case-insensitive way in the resource file; the `ValueRegex` option can be used with `/i` to allow case-insensitive regex matches, e.g. `{Key: "connectionType", ValueRegex: strPtr(fmt.Sprintf("/%s/i", connectionType))}`.
- If the resource has a `zone` key, use this logic to get the region (a sketch of such a helper follows this list):

  ```go
  region := d.Get("region").String()
  zone := d.Get("zone").String()
  if zone != "" {
      region = zoneToRegion(zone)
  }
  ```
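The `zoneToRegion` helper referenced above lives in the provider package. As a rough sketch of what such a helper typically does for Google-style zone names (an assumption shown only to illustrate the zone-to-region mapping, not the actual implementation):

```go
// Sketch: map a zone such as "us-central1-a" to its region "us-central1" by
// dropping the final zone suffix (assumes Google-style zone names; uses "strings").
func zoneToRegion(zone string) string {
	parts := strings.Split(zone, "-")
	if len(parts) > 2 {
		return strings.Join(parts[:len(parts)-1], "-")
	}
	return zone
}
```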
@alikhajeh1 and @aliscott rotate release responsibilities between them.
- In here, click on the "Go" build for the master branch, click on Build, expand Test, then use the "Search logs" box to find any line that has "Multiple products found", "No products found for" or "No prices found for". Update the resource files in question to fix these errors; often it's because the price filters need to be adjusted to only return 1 result.
- In the infracost repo, run `git tag vx.y.z && git push origin vx.y.z`.
- Wait for the GH Actions to complete; the newly created draft release should have the darwin-amd64.tar.gz, darwin-arm64.tar.gz, windows-amd64.tar.gz, and linux-amd64.tar.gz assets.
- Click on the Edit draft button, add the release notes from the commits between this and the last release, and click on publish.
- In the `infracost-atlantis` repo, run the following steps so the Atlantis integration uses the latest version of Infracost:

  ```sh
  # you can also push to master if you want the GH Action to do the following
  git pull
  docker build --no-cache -t infracost/infracost-atlantis:latest .
  docker push infracost/infracost-atlantis:latest
  ```
- Wait for the infracost brew PR to be merged.
- Announce the release in the infracost-community Slack announcements channel.
- Update the docs repo with any required changes and supported resources.
- Close addressed issues and tag anyone who liked/commented on them to tell them it's live in version X.
If a new flag/feature is added that requires CI support, update the repos mentioned here. For the GitHub Action, a new tag is needed and the release should be published on the GitHub Marketplace. For the CircleCI orb, the readme mentions the commit prefix that triggers releases to the CircleCI orb marketplace.