-# Path separator and minimum of 1 space between component path and owners is
-# important for validation steps
-#
* @open-telemetry/collector-contrib-approvers
-cmd/configschema/ @open-telemetry/collector-contrib-approvers @mx-psi @dmitryax @pmcollins
+cmd/configschema/ @open-telemetry/collector-contrib-approvers @mx-psi @dmitryax
+cmd/githubgen/ @open-telemetry/collector-contrib-approvers @atoulme
cmd/mdatagen/ @open-telemetry/collector-contrib-approvers @dmitryax
cmd/opampsupervisor/ @open-telemetry/collector-contrib-approvers @evan-bradley @atoulme @tigrannajaryan
cmd/otelcontribcol/ @open-telemetry/collector-contrib-approvers
@@ -28,202 +24,229 @@ cmd/oteltestbedcol/ @open-telemetry/collect
cmd/telemetrygen/ @open-telemetry/collector-contrib-approvers @mx-psi @codeboten
confmap/provider/s3provider/ @open-telemetry/collector-contrib-approvers @Aneurysm9
+confmap/provider/secretsmanagerprovider/ @open-telemetry/collector-contrib-approvers @driverpt @atoulme
connector/countconnector/ @open-telemetry/collector-contrib-approvers @djaglowski @jpkrohling
-connector/routingconnector/ @open-telemetry/collector-contrib-approvers @jpkrohling @kovrus @mwear
+connector/datadogconnector/ @open-telemetry/collector-contrib-approvers @mx-psi @dineshg13
+connector/exceptionsconnector/ @open-telemetry/collector-contrib-approvers @jpkrohling @marctc
+connector/failoverconnector/ @open-telemetry/collector-contrib-approvers @akats7 @djaglowski @fatsheep9146
+connector/routingconnector/ @open-telemetry/collector-contrib-approvers @jpkrohling @mwear
connector/servicegraphconnector/ @open-telemetry/collector-contrib-approvers @jpkrohling @mapno
-connector/spanmetricsconnector/ @open-telemetry/collector-contrib-approvers @albertteoh @kovrus
+connector/spanmetricsconnector/ @open-telemetry/collector-contrib-approvers @portertech
examples/demo/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
-exporter/alibabacloudlogserviceexporter/ @open-telemetry/collector-contrib-approvers @shabicheng @kongluoxing @qiansheng91
-exporter/awscloudwatchlogsexporter/ @open-telemetry/collector-contrib-approvers @boostchicken
-exporter/awsemfexporter/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @shaochengwang @mxiamxia
+exporter/alertmanagerexporter/ @open-telemetry/collector-contrib-approvers @jpkrohling @sokoide @mcube8
+exporter/awscloudwatchlogsexporter/ @open-telemetry/collector-contrib-approvers @boostchicken @bryan-aguilar @rapphil
+exporter/awsemfexporter/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @shaochengwang @mxiamxia @bryan-aguilar
exporter/awskinesisexporter/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @MovieStoreGuy
exporter/awss3exporter/ @open-telemetry/collector-contrib-approvers @atoulme @pdelewski
exporter/awsxrayexporter/ @open-telemetry/collector-contrib-approvers @wangzlei @srprash
+exporter/azuredataexplorerexporter/ @open-telemetry/collector-contrib-approvers @asaharn @ag-ramachandran
exporter/azuremonitorexporter/ @open-telemetry/collector-contrib-approvers @pcwiese
-exporter/azuredataexplorerexporter/ @open-telemetry/collector-contrib-approvers @asaharan @ag-ramachandran
-exporter/clickhouseexporter/ @open-telemetry/collector-contrib-approvers @hanjm @dmitryax @Frapschen
exporter/carbonexporter/ @open-telemetry/collector-contrib-approvers @aboguszewski-sumo
exporter/cassandraexporter/ @open-telemetry/collector-contrib-approvers @atoulme @emreyalvac
-exporter/coralogixexporter/ @open-telemetry/collector-contrib-approvers @oded-dd @povilasv @matej-g
-exporter/datadogexporter/ @open-telemetry/collector-contrib-approvers @mx-psi @gbbr @dineshg13 @liustanley @songy23
-exporter/datasetexporter/ @open-telemetry/collector-contrib-approvers @atoulme @martin-majlis-s1 @zdaratom
+exporter/clickhouseexporter/ @open-telemetry/collector-contrib-approvers @hanjm @dmitryax @Frapschen
+exporter/coralogixexporter/ @open-telemetry/collector-contrib-approvers @povilasv @matej-g
+exporter/datadogexporter/ @open-telemetry/collector-contrib-approvers @mx-psi @dineshg13 @liustanley @songy23 @mackjmr
+exporter/datasetexporter/ @open-telemetry/collector-contrib-approvers @atoulme @martin-majlis-s1 @zdaratom-s1 @tomaz-s1
exporter/dynatraceexporter/ @open-telemetry/collector-contrib-approvers @dyladan @arminru @evan-bradley
exporter/elasticsearchexporter/ @open-telemetry/collector-contrib-approvers @JaredTan95
-exporter/f5cloudexporter/ @open-telemetry/collector-contrib-approvers @gramidt
+exporter/f5cloudexporter/ @open-telemetry/collector-contrib-approvers
exporter/fileexporter/ @open-telemetry/collector-contrib-approvers @atingchen
exporter/googlecloudexporter/ @open-telemetry/collector-contrib-approvers @aabmass @dashpole @jsuereth @punya @damemi @psx95
-exporter/googlemanagedprometheusexporter/ @open-telemetry/collector-contrib-approvers @aabmass @dashpole @jsuereth @punya @damemi @psx95
exporter/googlecloudpubsubexporter/ @open-telemetry/collector-contrib-approvers @alexvanboxel
+exporter/googlemanagedprometheusexporter/ @open-telemetry/collector-contrib-approvers @aabmass @dashpole @jsuereth @punya @damemi @psx95
+exporter/honeycombmarkerexporter/ @open-telemetry/collector-contrib-approvers @TylerHelmuth @fchikwekwe
exporter/influxdbexporter/ @open-telemetry/collector-contrib-approvers @jacobmarble
exporter/instanaexporter/ @open-telemetry/collector-contrib-approvers @jpkrohling @hickeyma
-exporter/jaegerexporter/ @open-telemetry/collector-contrib-approvers @jpkrohling @frzifus
-exporter/jaegerthrifthttpexporter/ @open-telemetry/collector-contrib-approvers @jpkrohling @pavolloffay @frzifus
exporter/kafkaexporter/ @open-telemetry/collector-contrib-approvers @pavolloffay @MovieStoreGuy
+exporter/kineticaexporter/ @open-telemetry/collector-contrib-approvers @am-kinetica @TylerHelmuth
exporter/loadbalancingexporter/ @open-telemetry/collector-contrib-approvers @jpkrohling
exporter/logicmonitorexporter/ @open-telemetry/collector-contrib-approvers @bogdandrutu @khyatigandhi6 @avadhut123pisal
-exporter/logzioexporter/ @open-telemetry/collector-contrib-approvers @Doron-Bargo @yotamloe
-exporter/lokiexporter/ @open-telemetry/collector-contrib-approvers @gramidt @gouthamve @jpkrohling @kovrus @mar4uk
+exporter/logzioexporter/ @open-telemetry/collector-contrib-approvers @yotamloe
+exporter/lokiexporter/ @open-telemetry/collector-contrib-approvers @gramidt @gouthamve @jpkrohling @mar4uk
exporter/mezmoexporter/ @open-telemetry/collector-contrib-approvers @dashpole @billmeyer @gjanco
exporter/opencensusexporter/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
-exporter/parquetexporter/ @open-telemetry/collector-contrib-approvers @atoulme
+exporter/opensearchexporter/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @MitchellGale @MaxKsyunz @YANG-DB
+exporter/otelarrowexporter/ @open-telemetry/collector-contrib-approvers @jmacd @moh-osman3
exporter/prometheusexporter/ @open-telemetry/collector-contrib-approvers @Aneurysm9
exporter/prometheusremotewriteexporter/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @rapphil
-exporter/pulsarexporter/ @open-telemetry/collector-contrib-approvers @dmitryax @tjiuming
+exporter/pulsarexporter/ @open-telemetry/collector-contrib-approvers @dmitryax @dao-jun
exporter/sapmexporter/ @open-telemetry/collector-contrib-approvers @dmitryax @atoulme
exporter/sentryexporter/ @open-telemetry/collector-contrib-approvers @AbhiPrasad
exporter/signalfxexporter/ @open-telemetry/collector-contrib-approvers @dmitryax @crobert-1
-exporter/skywalkingexporter/ @open-telemetry/collector-contrib-approvers @liqiangz
exporter/splunkhecexporter/ @open-telemetry/collector-contrib-approvers @atoulme @dmitryax
exporter/sumologicexporter/ @open-telemetry/collector-contrib-approvers @sumo-drosiek
exporter/syslogexporter/ @open-telemetry/collector-contrib-approvers @kkujawa-sumo @rnishtala-sumo @astencel-sumo
-exporter/tanzuobservabilityexporter/ @open-telemetry/collector-contrib-approvers @oppegard @thepeterstone @keep94
exporter/tencentcloudlogserviceexporter/ @open-telemetry/collector-contrib-approvers @wgliang @yiyang5055
exporter/zipkinexporter/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy @astencel-sumo @crobert-1
+extension/ackextension/ @open-telemetry/collector-contrib-approvers @zpzhuSplunk @splunkericl @atoulme
extension/asapauthextension/ @open-telemetry/collector-contrib-approvers @jamesmoessis @MovieStoreGuy
extension/awsproxy/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @mxiamxia
-extension/basicauthextension/ @open-telemetry/collector-contrib-approvers @jpkrohling @svrakitin
-extension/bearertokenauthextension/ @open-telemetry/collector-contrib-approvers @jpkrohling @pavankrish123 @frzifus
-extension/encodingextension/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy
-extension/headerssetterextension/ @open-telemetry/collector-contrib-approvers @jpkrohling @kovrus
+extension/basicauthextension/ @open-telemetry/collector-contrib-approvers @jpkrohling @frzifus
+extension/bearertokenauthextension/ @open-telemetry/collector-contrib-approvers @jpkrohling @frzifus
+extension/encoding/ @open-telemetry/collector-contrib-approvers @atoulme @dao-jun @dmitryax @MovieStoreGuy @VihasMakwana
+extension/encoding/jaegerencodingextension/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy @atoulme
+extension/encoding/jsonlogencodingextension/ @open-telemetry/collector-contrib-approvers @VihasMakwana @atoulme
+extension/encoding/otlpencodingextension/ @open-telemetry/collector-contrib-approvers @dao-jun @VihasMakwana
+extension/encoding/textencodingextension/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy @atoulme
+extension/encoding/zipkinencodingextension/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy @dao-jun
+extension/headerssetterextension/ @open-telemetry/collector-contrib-approvers @jpkrohling
extension/healthcheckextension/ @open-telemetry/collector-contrib-approvers @jpkrohling
extension/httpforwarder/ @open-telemetry/collector-contrib-approvers @atoulme @rmfitzpatrick
-extension/jaegerremotesampling/ @open-telemetry/collector-contrib-approvers @jpkrohling @frzifus
+extension/httpforwarderextension/ @open-telemetry/collector-contrib-approvers @atoulme @rmfitzpatrick
+extension/jaegerremotesampling/ @open-telemetry/collector-contrib-approvers @yurishkuro @frzifus
extension/oauth2clientauthextension/ @open-telemetry/collector-contrib-approvers @pavankrish123 @jpkrohling
extension/observer/ @open-telemetry/collector-contrib-approvers @dmitryax @rmfitzpatrick
extension/observer/dockerobserver/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy
+extension/observer/ecsobserver/ @open-telemetry/collector-contrib-approvers @dmitryax @rmfitzpatrick
extension/observer/ecstaskobserver/ @open-telemetry/collector-contrib-approvers @rmfitzpatrick
extension/observer/hostobserver/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy
extension/observer/k8sobserver/ @open-telemetry/collector-contrib-approvers @rmfitzpatrick @dmitryax
extension/oidcauthextension/ @open-telemetry/collector-contrib-approvers @jpkrohling
+extension/opampextension/ @open-telemetry/collector-contrib-approvers @portertech @evan-bradley @tigrannajaryan
extension/pprofextension/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy
+extension/remotetapextension/ @open-telemetry/collector-contrib-approvers @atoulme
extension/sigv4authextension/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @erichsueh3
+extension/solarwindsapmsettingsextension/ @open-telemetry/collector-contrib-approvers @jerrytfleung @cheempz
extension/storage/ @open-telemetry/collector-contrib-approvers @dmitryax @atoulme @djaglowski
extension/storage/dbstorage/ @open-telemetry/collector-contrib-approvers @dmitryax @atoulme
extension/storage/filestorage/ @open-telemetry/collector-contrib-approvers @djaglowski
+extension/sumologicextension/ @open-telemetry/collector-contrib-approvers @astencel-sumo @sumo-drosiek @swiatekm-sumo
internal/aws/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @mxiamxia
+internal/collectd/ @open-telemetry/collector-contrib-approvers @atoulme
+internal/coreinternal/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
+internal/datadog/ @open-telemetry/collector-contrib-approvers @mx-psi @dineshg13
internal/docker/ @open-telemetry/collector-contrib-approvers @rmfitzpatrick @jamesmoessis
+internal/exp/metrics/ @open-telemetry/collector-contrib-approvers @sh0rez @RichieSams
+internal/filter/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
internal/k8sconfig/ @open-telemetry/collector-contrib-approvers @dmitryax
internal/k8stest/ @open-telemetry/collector-contrib-approvers @crobert-1
+internal/kafka/ @open-telemetry/collector-contrib-approvers @pavolloffay @MovieStoreGuy
internal/kubelet/ @open-telemetry/collector-contrib-approvers @dmitryax
internal/metadataproviders/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @dashpole
+internal/sharedcomponent/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
internal/splunk/ @open-telemetry/collector-contrib-approvers @dmitryax
+internal/sqlquery/ @open-telemetry/collector-contrib-approvers @crobert-1 @dmitryax
internal/tools/ @open-telemetry/collector-contrib-approvers
-internal/coreinternal/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
-internal/filter/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
-internal/sharedcomponent/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
+pkg/batchperresourceattr/ @open-telemetry/collector-contrib-approvers @atoulme @dmitryax
pkg/batchpersignal/ @open-telemetry/collector-contrib-approvers @jpkrohling
+pkg/experimentalmetricmetadata/ @open-telemetry/collector-contrib-approvers @rmfitzpatrick
+pkg/golden/ @open-telemetry/collector-contrib-approvers @djaglowski @atoulme
pkg/ottl/ @open-telemetry/collector-contrib-approvers @TylerHelmuth @kentquirk @bogdandrutu @evan-bradley
pkg/pdatatest/ @open-telemetry/collector-contrib-approvers @djaglowski @fatsheep9146
pkg/pdatautil/ @open-telemetry/collector-contrib-approvers @dmitryax
pkg/resourcetotelemetry/ @open-telemetry/collector-contrib-approvers @mx-psi
+pkg/sampling/ @open-telemetry/collector-contrib-approvers @kentquirk @jmacd
pkg/stanza/ @open-telemetry/collector-contrib-approvers @djaglowski
-pkg/translator/jaeger/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
-pkg/translator/loki/ @open-telemetry/collector-contrib-approvers @gouthamve @jpkrohling @kovrus @mar4uk
+pkg/translator/azure/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers @atoulme @cparkins
+pkg/translator/jaeger/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers @frzifus
+pkg/translator/loki/ @open-telemetry/collector-contrib-approvers @gouthamve @jpkrohling @mar4uk
pkg/translator/opencensus/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
pkg/translator/prometheus/ @open-telemetry/collector-contrib-approvers @dashpole @bertysentry
-pkg/translator/prometheusremotewrite/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @kovrus
+pkg/translator/prometheusremotewrite/ @open-telemetry/collector-contrib-approvers @Aneurysm9
pkg/translator/signalfx/ @open-telemetry/collector-contrib-approvers @dmitryax
+pkg/translator/skywalking/ @open-telemetry/collector-contrib-approvers @JaredTan95
pkg/translator/zipkin/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy @astencel-sumo @crobert-1
-pkg/winperfcounters/ @open-telemetry/collector-contrib-approvers @dashpole @mrod1598 @binaryfissiongames
-pkg/batchperresourceattr/ @open-telemetry/collector-contrib-approvers @atoulme @dmitryax
-pkg/experimentalmetricmetadata/ @open-telemetry/collector-contrib-approvers @rmfitzpatrick
+pkg/winperfcounters/ @open-telemetry/collector-contrib-approvers @dashpole @Mrod1598 @BinaryFissionGames
processor/attributesprocessor/ @open-telemetry/collector-contrib-approvers @boostchicken
processor/cumulativetodeltaprocessor/ @open-telemetry/collector-contrib-approvers @TylerHelmuth
-processor/datadogprocessor/ @open-telemetry/collector-contrib-approvers @mx-psi @gbbr @dineshg13
+processor/deltatocumulativeprocessor/ @open-telemetry/collector-contrib-approvers @sh0rez @RichieSams
processor/deltatorateprocessor/ @open-telemetry/collector-contrib-approvers @Aneurysm9
processor/filterprocessor/ @open-telemetry/collector-contrib-approvers @TylerHelmuth @boostchicken
processor/groupbyattrsprocessor/ @open-telemetry/collector-contrib-approvers @rnishtala-sumo
processor/groupbytraceprocessor/ @open-telemetry/collector-contrib-approvers @jpkrohling
-processor/k8sattributesprocessor/ @open-telemetry/collector-contrib-approvers @dmitryax @rmfitzpatrick @fatsheep9146
+processor/intervalprocessor/ @open-telemetry/collector-contrib-approvers @RichieSams
+processor/k8sattributesprocessor/ @open-telemetry/collector-contrib-approvers @dmitryax @rmfitzpatrick @fatsheep9146 @TylerHelmuth
processor/logstransformprocessor/ @open-telemetry/collector-contrib-approvers @djaglowski @dehaansa
processor/metricsgenerationprocessor/ @open-telemetry/collector-contrib-approvers @Aneurysm9
processor/metricstransformprocessor/ @open-telemetry/collector-contrib-approvers @dmitryax
processor/probabilisticsamplerprocessor/ @open-telemetry/collector-contrib-approvers @jpkrohling
-processor/redactionprocessor/ @open-telemetry/collector-contrib-approvers @leonsp-ai @dmitryax @mx-psi
+processor/redactionprocessor/ @open-telemetry/collector-contrib-approvers @dmitryax @mx-psi @TylerHelmuth
+processor/remotetapprocessor/ @open-telemetry/collector-contrib-approvers @atoulme
processor/resourcedetectionprocessor/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @dashpole
-processor/resourceprocessor/ @open-telemetry/collector-contrib-approvers @dmitryax
processor/resourcedetectionprocessor/internal/azure/ @open-telemetry/collector-contrib-approvers @mx-psi
processor/resourcedetectionprocessor/internal/heroku/ @open-telemetry/collector-contrib-approvers @atoulme
processor/resourcedetectionprocessor/internal/openshift/ @open-telemetry/collector-contrib-approvers @frzifus
-processor/remoteobserverprocessor/ @open-telemetry/collector-contrib-approvers @pmcollins
-processor/routingprocessor/ @open-telemetry/collector-contrib-approvers @jpkrohling @kovrus
+processor/resourceprocessor/ @open-telemetry/collector-contrib-approvers @dmitryax
+processor/routingprocessor/ @open-telemetry/collector-contrib-approvers @jpkrohling
processor/schemaprocessor/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy
-processor/servicegraphprocessor/ @open-telemetry/collector-contrib-approvers @jpkrohling @mapno
-processor/spanmetricsprocessor/ @open-telemetry/collector-contrib-approvers @albertteoh
+processor/spanmetricsprocessor/ @open-telemetry/collector-contrib-approvers
processor/spanprocessor/ @open-telemetry/collector-contrib-approvers @boostchicken
+processor/sumologicprocessor/ @open-telemetry/collector-contrib-approvers @aboguszewski-sumo @astencel-sumo @sumo-drosiek
processor/tailsamplingprocessor/ @open-telemetry/collector-contrib-approvers @jpkrohling
processor/transformprocessor/ @open-telemetry/collector-contrib-approvers @TylerHelmuth @kentquirk @bogdandrutu @evan-bradley
-receiver/azureeventhubreceiver/ @open-telemetry/collector-contrib-approvers @atoulme @djaglowski
-receiver/activedirectorydsreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @binaryfissiongames
+receiver/activedirectorydsreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @BinaryFissionGames
receiver/aerospikereceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @antonblock
receiver/apachereceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
receiver/apachesparkreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @Caleb-Hurshman @mrsillydog
+receiver/awscloudwatchmetricsreceiver/ @open-telemetry/collector-contrib-approvers @jpkrohling
receiver/awscloudwatchreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @schmikei
-receiver/awscloudwatchmetricsreceiver/ @open-telemetry/collector-contrib-approvers @jpkrohling @kovrus
receiver/awscontainerinsightreceiver/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @pxaws
receiver/awsecscontainermetricsreceiver/ @open-telemetry/collector-contrib-approvers @Aneurysm9
receiver/awsfirehosereceiver/ @open-telemetry/collector-contrib-approvers @Aneurysm9
receiver/awsxrayreceiver/ @open-telemetry/collector-contrib-approvers @wangzlei @srprash
receiver/azureblobreceiver/ @open-telemetry/collector-contrib-approvers @eedorenko @mx-psi
-receiver/azuremonitorreceiver/ @open-telemetry/collector-contrib-approvers @altuner @codeboten
+receiver/azureeventhubreceiver/ @open-telemetry/collector-contrib-approvers @atoulme @djaglowski @cparkins
+receiver/azuremonitorreceiver/ @open-telemetry/collector-contrib-approvers @nslaughter @codeboten
receiver/bigipreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @StefanKurek
receiver/carbonreceiver/ @open-telemetry/collector-contrib-approvers @aboguszewski-sumo
receiver/chronyreceiver/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy @jamesmoessis
receiver/cloudflarereceiver/ @open-telemetry/collector-contrib-approvers @dehaansa @djaglowski
-receiver/cloudfoundryreceiver/ @open-telemetry/collector-contrib-approvers @agoallikmaa @pellared @crobert-1
+receiver/cloudfoundryreceiver/ @open-telemetry/collector-contrib-approvers @pellared @crobert-1
receiver/collectdreceiver/ @open-telemetry/collector-contrib-approvers @atoulme
receiver/couchdbreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
receiver/datadogreceiver/ @open-telemetry/collector-contrib-approvers @boostchicken @gouthamve @jpkrohling @MovieStoreGuy
receiver/dockerstatsreceiver/ @open-telemetry/collector-contrib-approvers @rmfitzpatrick @jamesmoessis
-receiver/elasticsearchreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @binaryfissiongames
+receiver/elasticsearchreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @BinaryFissionGames
receiver/expvarreceiver/ @open-telemetry/collector-contrib-approvers @jamesmoessis @MovieStoreGuy
receiver/filelogreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
-receiver/filereceiver/ @open-telemetry/collector-contrib-approvers @pmcollins @djaglowski
receiver/filestatsreceiver/ @open-telemetry/collector-contrib-approvers @atoulme
-receiver/flinkmetricsreceiver/ @open-telemetry/collector-contrib-approvers @jonathanwamsley @djaglowski
+receiver/flinkmetricsreceiver/ @open-telemetry/collector-contrib-approvers @JonathanWamsley @djaglowski
receiver/fluentforwardreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax
+receiver/gitproviderreceiver/ @open-telemetry/collector-contrib-approvers @adrielp @astencel-sumo
receiver/googlecloudpubsubreceiver/ @open-telemetry/collector-contrib-approvers @alexvanboxel
-receiver/googlecloudspannerreceiver/ @open-telemetry/collector-contrib-approvers @architjugran @varunraiko @kiranmayib
+receiver/googlecloudspannerreceiver/ @open-telemetry/collector-contrib-approvers @varunraiko
receiver/haproxyreceiver/ @open-telemetry/collector-contrib-approvers @atoulme @MovieStoreGuy
-receiver/hostmetricsreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax
+receiver/hostmetricsreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @braydonk
receiver/httpcheckreceiver/ @open-telemetry/collector-contrib-approvers @codeboten
+receiver/iisreceiver/ @open-telemetry/collector-contrib-approvers @Mrod1598 @djaglowski
receiver/influxdbreceiver/ @open-telemetry/collector-contrib-approvers @jacobmarble
-receiver/iisreceiver/ @open-telemetry/collector-contrib-approvers @mrod1598 @djaglowski
-receiver/jaegerreceiver/ @open-telemetry/collector-contrib-approvers @jpkrohling
+receiver/jaegerreceiver/ @open-telemetry/collector-contrib-approvers @yurishkuro
receiver/jmxreceiver/ @open-telemetry/collector-contrib-approvers @rmfitzpatrick
receiver/journaldreceiver/ @open-telemetry/collector-contrib-approvers @sumo-drosiek @djaglowski
-receiver/k8sclusterreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax
-receiver/k8seventsreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax
-receiver/k8sobjectsreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @hvaghani221
+receiver/k8sclusterreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @TylerHelmuth @povilasv
+receiver/k8seventsreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @TylerHelmuth
+receiver/k8sobjectsreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @hvaghani221 @TylerHelmuth
receiver/kafkametricsreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax
receiver/kafkareceiver/ @open-telemetry/collector-contrib-approvers @pavolloffay @MovieStoreGuy
-receiver/kubeletstatsreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax
-receiver/lokireceiver/ @open-telemetry/collector-contrib-approvers @mar4uk @kovrus @jpkrohling
+receiver/kubeletstatsreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @TylerHelmuth
+receiver/lokireceiver/ @open-telemetry/collector-contrib-approvers @mar4uk @jpkrohling
receiver/memcachedreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
-receiver/mongodbreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @schmikei
receiver/mongodbatlasreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @schmikei
+receiver/mongodbreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @schmikei
receiver/mysqlreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
+receiver/namedpipereceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
receiver/nginxreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
receiver/nsxtreceiver/ @open-telemetry/collector-contrib-approvers @dashpole @schmikei
receiver/opencensusreceiver/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
receiver/oracledbreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @crobert-1 @atoulme
-receiver/podmanreceiver/ @open-telemetry/collector-contrib-approvers @rogercoll
+receiver/osqueryreceiver/ @open-telemetry/collector-contrib-approvers @codeboten @nslaughter @smithclay
+receiver/otelarrowreceiver/ @open-telemetry/collector-contrib-approvers @jmacd @moh-osman3
receiver/otlpjsonfilereceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @atoulme
+receiver/podmanreceiver/ @open-telemetry/collector-contrib-approvers @rogercoll
receiver/postgresqlreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
-receiver/prometheusexecreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax
receiver/prometheusreceiver/ @open-telemetry/collector-contrib-approvers @Aneurysm9 @dashpole
-receiver/rabbitmqreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @cpheps
-receiver/pulsarreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @tjiuming
+receiver/pulsarreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @dao-jun
receiver/purefareceiver/ @open-telemetry/collector-contrib-approvers @jpkrohling @dgoscn @chrroberts-pure
receiver/purefbreceiver/ @open-telemetry/collector-contrib-approvers @jpkrohling @dgoscn @chrroberts-pure
+receiver/rabbitmqreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @cpheps
receiver/receivercreator/ @open-telemetry/collector-contrib-approvers @rmfitzpatrick
receiver/redisreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @hughesjj
receiver/riakreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @armstrmi
@@ -233,11 +256,11 @@ receiver/signalfxreceiver/ @open-telemetry/collect
receiver/simpleprometheusreceiver/ @open-telemetry/collector-contrib-approvers @fatsheep9146
receiver/skywalkingreceiver/ @open-telemetry/collector-contrib-approvers @JaredTan95
receiver/snmpreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @StefanKurek @tamir-michaeli
+receiver/snowflakereceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @shalper2
receiver/solacereceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @mcardy
-receiver/splunkenterprisereceiver/ @open-telemetry/collector-contrib-approvers @shalper2 @MovieStoreGuy
+receiver/splunkenterprisereceiver/ @open-telemetry/collector-contrib-approvers @shalper2 @MovieStoreGuy @greatestusername
receiver/splunkhecreceiver/ @open-telemetry/collector-contrib-approvers @atoulme
-receiver/snowflakereceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @shalper2
-receiver/sqlqueryreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @pmcollins
+receiver/sqlqueryreceiver/ @open-telemetry/collector-contrib-approvers @dmitryax @crobert-1
receiver/sqlserverreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @StefanKurek
receiver/sshcheckreceiver/ @open-telemetry/collector-contrib-approvers @nslaughter @codeboten
receiver/statsdreceiver/ @open-telemetry/collector-contrib-approvers @jmacd @dmitryax
@@ -245,18 +268,33 @@ receiver/syslogreceiver/ @open-telemetry/collect
receiver/tcplogreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
receiver/udplogreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
receiver/vcenterreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @schmikei
+receiver/wavefrontreceiver/ @open-telemetry/collector-contrib-approvers @samiura
receiver/webhookeventreceiver/ @open-telemetry/collector-contrib-approvers @atoulme @shalper2
-receiver/windowseventlogreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @armstrmi
+receiver/windowseventlogreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski @armstrmi @pjanotti
receiver/windowsperfcountersreceiver/ @open-telemetry/collector-contrib-approvers @dashpole
receiver/zipkinreceiver/ @open-telemetry/collector-contrib-approvers @MovieStoreGuy @astencel-sumo @crobert-1
receiver/zookeeperreceiver/ @open-telemetry/collector-contrib-approvers @djaglowski
testbed/ @open-telemetry/collector-contrib-approvers @open-telemetry/collector-approvers
-testbed/mockdatareceivers/mockawsxrayreceiver/ @open-telemetry/collector-contrib-approvers @wangzlei @srprash
testbed/mockdatasenders/mockdatadogagentexporter/ @open-telemetry/collector-contrib-approvers @boostchicken
+#####################################################
+#
+# List of distribution maintainers for OpenTelemetry Collector Contrib
+#
+#####################################################
+reports/distributions/core.yaml @open-telemetry/collector-contrib-approvers
+reports/distributions/contrib.yaml @open-telemetry/collector-contrib-approvers
+reports/distributions/aws.yaml @open-telemetry/collector-contrib-approvers
+reports/distributions/grafana.yaml @open-telemetry/collector-contrib-approvers
+reports/distributions/observiq.yaml @open-telemetry/collector-contrib-approvers
+reports/distributions/redhat.yaml @open-telemetry/collector-contrib-approvers
+reports/distributions/splunk.yaml @open-telemetry/collector-contrib-approvers @atoulme @crobert-1 @dmitryax @hughesjj @jeffreyc-splunk @jinja2 @jvoravong @panotti @rmfitzpatrick @samiura
+reports/distributions/sumo.yaml @open-telemetry/collector-contrib-approvers @aboguszewski-sumo @astencel-sumo @kkujawa-sumo @rnishtala-sumo @sumo-drosiek @swiatekm-sumo
+reports/distributions/liatrio.yaml @open-telemetry/collector-contrib-approvers @adrielp
+
## UNMAINTAINED components
-## The Github issue template generation code needs this to generate the corresponding labels.
-receiver/wavefrontreceiver/ @open-telemetry/collector-contrib-approvers
+exporter/alibabacloudlogserviceexporter/ @open-telemetry/collector-contrib-approvers
+exporter/skywalkingexporter/ @open-telemetry/collector-contrib-approvers
diff --git a/.github/ISSUE_TEMPLATE/bug_report.yaml b/.github/ISSUE_TEMPLATE/bug_report.yaml
index bbf596256228..b57c6d05dc0e 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yaml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yaml
@@ -19,17 +19,23 @@ body:
# Do not manually edit it.
# Start Collector components list
- cmd/configschema
+ - cmd/githubgen
- cmd/mdatagen
- cmd/opampsupervisor
- cmd/otelcontribcol
- cmd/oteltestbedcol
- cmd/telemetrygen
- confmap/provider/s3provider
+ - confmap/provider/secretsmanagerprovider
- connector/count
+ - connector/datadog
+ - connector/exceptions
+ - connector/failover
- connector/routing
- connector/servicegraph
- connector/spanmetrics
- examples/demo
+ - exporter/alertmanager
- exporter/alibabacloudlogservice
- exporter/awscloudwatchlogs
- exporter/awsemf
@@ -51,18 +57,19 @@ body:
- exporter/googlecloud
- exporter/googlecloudpubsub
- exporter/googlemanagedprometheus
+ - exporter/honeycombmarker
- exporter/influxdb
- exporter/instana
- - exporter/jaeger
- - exporter/jaegerthrifthttp
- exporter/kafka
+ - exporter/kinetica
- exporter/loadbalancing
- exporter/logicmonitor
- exporter/logzio
- exporter/loki
- exporter/mezmo
- exporter/opencensus
- - exporter/parquet
+ - exporter/opensearch
+ - exporter/otelarrow
- exporter/prometheus
- exporter/prometheusremotewrite
- exporter/pulsar
@@ -73,14 +80,19 @@ body:
- exporter/splunkhec
- exporter/sumologic
- exporter/syslog
- - exporter/tanzuobservability
- exporter/tencentcloudlogservice
- exporter/zipkin
+ - extension/ack
- extension/asapauth
- extension/awsproxy
- extension/basicauth
- extension/bearertokenauth
- extension/encoding
+ - extension/encoding/jaegerencoding
+ - extension/encoding/jsonlogencoding
+ - extension/encoding/otlpencoding
+ - extension/encoding/textencoding
+ - extension/encoding/zipkinencoding
- extension/headerssetter
- extension/healthcheck
- extension/httpforwarder
@@ -88,56 +100,71 @@ body:
- extension/oauth2clientauth
- extension/observer
- extension/observer/dockerobserver
+ - extension/observer/ecsobserver
- extension/observer/ecstaskobserver
- extension/observer/hostobserver
- extension/observer/k8sobserver
- extension/oidcauth
+ - extension/opamp
- extension/pprof
+ - extension/remotetap
- extension/sigv4auth
+ - extension/solarwindsapmsettings
- extension/storage
- extension/storage/dbstorage
- extension/storage/filestorage
+ - extension/sumologic
- internal/aws
+ - internal/collectd
- internal/core
+ - internal/datadog
- internal/docker
+ - internal/exp/metrics
- internal/filter
- internal/k8sconfig
- internal/k8stest
+ - internal/kafka
- internal/kubelet
- internal/metadataproviders
- internal/sharedcomponent
- internal/splunk
+ - internal/sqlquery
- internal/tools
- pkg/batchperresourceattr
- pkg/batchpersignal
- pkg/experimentalmetricmetadata
+ - pkg/golden
- pkg/ottl
- pkg/pdatatest
- pkg/pdatautil
- pkg/resourcetotelemetry
+ - pkg/sampling
- pkg/stanza
+ - pkg/translator/azure
- pkg/translator/jaeger
- pkg/translator/loki
- pkg/translator/opencensus
- pkg/translator/prometheus
- pkg/translator/prometheusremotewrite
- pkg/translator/signalfx
+ - pkg/translator/skywalking
- pkg/translator/zipkin
- pkg/winperfcounters
- processor/attributes
- processor/cumulativetodelta
- - processor/datadog
+ - processor/deltatocumulative
- processor/deltatorate
- processor/filter
- processor/groupbyattrs
- processor/groupbytrace
+ - processor/interval
- processor/k8sattributes
- processor/logstransform
- processor/metricsgeneration
- processor/metricstransform
- processor/probabilisticsampler
- processor/redaction
- - processor/remoteobserver
+ - processor/remotetap
- processor/resource
- processor/resourcedetection
- processor/resourcedetection/internal/azure
@@ -145,9 +172,9 @@ body:
- processor/resourcedetection/internal/openshift
- processor/routing
- processor/schema
- - processor/servicegraph
- processor/span
- processor/spanmetrics
+ - processor/sumologic
- processor/tailsampling
- processor/transform
- receiver/activedirectoryds
@@ -174,11 +201,11 @@ body:
- receiver/dockerstats
- receiver/elasticsearch
- receiver/expvar
- - receiver/file
- receiver/filelog
- receiver/filestats
- receiver/flinkmetrics
- receiver/fluentforward
+ - receiver/gitprovider
- receiver/googlecloudpubsub
- receiver/googlecloudspanner
- receiver/haproxy
@@ -200,15 +227,17 @@ body:
- receiver/mongodb
- receiver/mongodbatlas
- receiver/mysql
+ - receiver/namedpipe
- receiver/nginx
- receiver/nsxt
- receiver/opencensus
- receiver/oracledb
+ - receiver/osquery
+ - receiver/otelarrow
- receiver/otlpjsonfile
- receiver/podman
- receiver/postgresql
- receiver/prometheus
- - receiver/prometheusexec
- receiver/pulsar
- receiver/purefa
- receiver/purefb
@@ -241,7 +270,6 @@ body:
- receiver/zipkin
- receiver/zookeeper
- testbed
- - testbed/mockdatareceivers/mockawsxrayreceiver
- testbed/mockdatasenders/mockdatadogagentexporter
# End Collector components list
- type: textarea
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
index 239ec91e31df..58ba9d4d6384 100644
--- a/.github/ISSUE_TEMPLATE/config.yml
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -1,5 +1,5 @@
blank_issues_enabled: false
contact_links:
- - name: OpenTelemetry Collector Slack Channel
- url: https://cloud-native.slack.com/archives/C01N6P7KR6W
+ - name: Stack Overflow
+ url: https://stackoverflow.com/questions/tagged/open-telemetry-collector
about: Please ask and answer questions here.
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yaml b/.github/ISSUE_TEMPLATE/feature_request.yaml
index 68edba73476b..8c832a203dad 100644
--- a/.github/ISSUE_TEMPLATE/feature_request.yaml
+++ b/.github/ISSUE_TEMPLATE/feature_request.yaml
@@ -13,17 +13,23 @@ body:
# Do not manually edit it.
# Start Collector components list
- cmd/configschema
+ - cmd/githubgen
- cmd/mdatagen
- cmd/opampsupervisor
- cmd/otelcontribcol
- cmd/oteltestbedcol
- cmd/telemetrygen
- confmap/provider/s3provider
+ - confmap/provider/secretsmanagerprovider
- connector/count
+ - connector/datadog
+ - connector/exceptions
+ - connector/failover
- connector/routing
- connector/servicegraph
- connector/spanmetrics
- examples/demo
+ - exporter/alertmanager
- exporter/alibabacloudlogservice
- exporter/awscloudwatchlogs
- exporter/awsemf
@@ -45,18 +51,19 @@ body:
- exporter/googlecloud
- exporter/googlecloudpubsub
- exporter/googlemanagedprometheus
+ - exporter/honeycombmarker
- exporter/influxdb
- exporter/instana
- - exporter/jaeger
- - exporter/jaegerthrifthttp
- exporter/kafka
+ - exporter/kinetica
- exporter/loadbalancing
- exporter/logicmonitor
- exporter/logzio
- exporter/loki
- exporter/mezmo
- exporter/opencensus
- - exporter/parquet
+ - exporter/opensearch
+ - exporter/otelarrow
- exporter/prometheus
- exporter/prometheusremotewrite
- exporter/pulsar
@@ -67,14 +74,19 @@ body:
- exporter/splunkhec
- exporter/sumologic
- exporter/syslog
- - exporter/tanzuobservability
- exporter/tencentcloudlogservice
- exporter/zipkin
+ - extension/ack
- extension/asapauth
- extension/awsproxy
- extension/basicauth
- extension/bearertokenauth
- extension/encoding
+ - extension/encoding/jaegerencoding
+ - extension/encoding/jsonlogencoding
+ - extension/encoding/otlpencoding
+ - extension/encoding/textencoding
+ - extension/encoding/zipkinencoding
- extension/headerssetter
- extension/healthcheck
- extension/httpforwarder
@@ -82,56 +94,71 @@ body:
- extension/oauth2clientauth
- extension/observer
- extension/observer/dockerobserver
+ - extension/observer/ecsobserver
- extension/observer/ecstaskobserver
- extension/observer/hostobserver
- extension/observer/k8sobserver
- extension/oidcauth
+ - extension/opamp
- extension/pprof
+ - extension/remotetap
- extension/sigv4auth
+ - extension/solarwindsapmsettings
- extension/storage
- extension/storage/dbstorage
- extension/storage/filestorage
+ - extension/sumologic
- internal/aws
+ - internal/collectd
- internal/core
+ - internal/datadog
- internal/docker
+ - internal/exp/metrics
- internal/filter
- internal/k8sconfig
- internal/k8stest
+ - internal/kafka
- internal/kubelet
- internal/metadataproviders
- internal/sharedcomponent
- internal/splunk
+ - internal/sqlquery
- internal/tools
- pkg/batchperresourceattr
- pkg/batchpersignal
- pkg/experimentalmetricmetadata
+ - pkg/golden
- pkg/ottl
- pkg/pdatatest
- pkg/pdatautil
- pkg/resourcetotelemetry
+ - pkg/sampling
- pkg/stanza
+ - pkg/translator/azure
- pkg/translator/jaeger
- pkg/translator/loki
- pkg/translator/opencensus
- pkg/translator/prometheus
- pkg/translator/prometheusremotewrite
- pkg/translator/signalfx
+ - pkg/translator/skywalking
- pkg/translator/zipkin
- pkg/winperfcounters
- processor/attributes
- processor/cumulativetodelta
- - processor/datadog
+ - processor/deltatocumulative
- processor/deltatorate
- processor/filter
- processor/groupbyattrs
- processor/groupbytrace
+ - processor/interval
- processor/k8sattributes
- processor/logstransform
- processor/metricsgeneration
- processor/metricstransform
- processor/probabilisticsampler
- processor/redaction
- - processor/remoteobserver
+ - processor/remotetap
- processor/resource
- processor/resourcedetection
- processor/resourcedetection/internal/azure
@@ -139,9 +166,9 @@ body:
- processor/resourcedetection/internal/openshift
- processor/routing
- processor/schema
- - processor/servicegraph
- processor/span
- processor/spanmetrics
+ - processor/sumologic
- processor/tailsampling
- processor/transform
- receiver/activedirectoryds
@@ -168,11 +195,11 @@ body:
- receiver/dockerstats
- receiver/elasticsearch
- receiver/expvar
- - receiver/file
- receiver/filelog
- receiver/filestats
- receiver/flinkmetrics
- receiver/fluentforward
+ - receiver/gitprovider
- receiver/googlecloudpubsub
- receiver/googlecloudspanner
- receiver/haproxy
@@ -194,15 +221,17 @@ body:
- receiver/mongodb
- receiver/mongodbatlas
- receiver/mysql
+ - receiver/namedpipe
- receiver/nginx
- receiver/nsxt
- receiver/opencensus
- receiver/oracledb
+ - receiver/osquery
+ - receiver/otelarrow
- receiver/otlpjsonfile
- receiver/podman
- receiver/postgresql
- receiver/prometheus
- - receiver/prometheusexec
- receiver/pulsar
- receiver/purefa
- receiver/purefb
@@ -235,7 +264,6 @@ body:
- receiver/zipkin
- receiver/zookeeper
- testbed
- - testbed/mockdatareceivers/mockawsxrayreceiver
- testbed/mockdatasenders/mockdatadogagentexporter
# End Collector components list
- type: textarea
diff --git a/.github/ISSUE_TEMPLATE/new_component.yaml b/.github/ISSUE_TEMPLATE/new_component.yaml
index a08b4c5e57af..982218d463c3 100644
--- a/.github/ISSUE_TEMPLATE/new_component.yaml
+++ b/.github/ISSUE_TEMPLATE/new_component.yaml
@@ -1,7 +1,7 @@
name: New component proposal
description: Suggest a new component for the project
title: "New component: "
-labels: ["new component", "needs triage"]
+labels: ["Sponsor Needed", "needs triage"]
body:
- type: textarea
attributes:
@@ -27,11 +27,21 @@ body:
description: A vendor-specific component directly interfaces with a vendor-specific API and is expected to be maintained by a representative of the same vendor.
options:
- label: This is a vendor-specific component
- - label: If this is a vendor-specific component, I am proposing to contribute this as a representative of the vendor.
+ - label: If this is a vendor-specific component, I am proposing to contribute and support it as a representative of the vendor.
+ - type: input
+ attributes:
+ label: Code Owner(s)
+ description: A code owner is responsible for supporting the component, including triaging issues, reviewing PRs, and submitting bug fixes.
+ Please list one or more members or aspiring members of the OpenTelemetry project who will serve as code owners.
+ For vendor-specific components, the code owner is required and must be a representative of the vendor.
+ For non-vendor components, having a code owner is strongly recommended. However, you may use the issue to try to find a code owner for your component.
- type: input
attributes:
label: Sponsor (optional)
- description: "A sponsor is an approver who will be in charge of being the official reviewer of the code. For vendor-specific components, it's good to have a volunteer sponsor. If you can't find one, we'll assign one in a round-robin fashion. For non-vendor components, having a sponsor means that your use-case has been validated. If there are no sponsors yet for the component, it's fine: use the issue as a means to try to find a sponsor for your component."
+ description: "A sponsor is an approver who will be in charge of being the official reviewer of the code.
+ For vendor-specific components, it's good to have a volunteer sponsor. If you can't find one, we'll assign one in a round-robin fashion.
+ For non-vendor components, having a sponsor means that your use-case has been validated.
+ If there are no sponsors yet for the component, it's fine: use the issue as a means to try to find a sponsor for your component."
- type: textarea
attributes:
label: Additional context
diff --git a/.github/ISSUE_TEMPLATE/other.yaml b/.github/ISSUE_TEMPLATE/other.yaml
index 6baf0637d164..87e6d43a704b 100644
--- a/.github/ISSUE_TEMPLATE/other.yaml
+++ b/.github/ISSUE_TEMPLATE/other.yaml
@@ -13,17 +13,23 @@ body:
# Do not manually edit it.
# Start Collector components list
- cmd/configschema
+ - cmd/githubgen
- cmd/mdatagen
- cmd/opampsupervisor
- cmd/otelcontribcol
- cmd/oteltestbedcol
- cmd/telemetrygen
- confmap/provider/s3provider
+ - confmap/provider/secretsmanagerprovider
- connector/count
+ - connector/datadog
+ - connector/exceptions
+ - connector/failover
- connector/routing
- connector/servicegraph
- connector/spanmetrics
- examples/demo
+ - exporter/alertmanager
- exporter/alibabacloudlogservice
- exporter/awscloudwatchlogs
- exporter/awsemf
@@ -45,18 +51,19 @@ body:
- exporter/googlecloud
- exporter/googlecloudpubsub
- exporter/googlemanagedprometheus
+ - exporter/honeycombmarker
- exporter/influxdb
- exporter/instana
- - exporter/jaeger
- - exporter/jaegerthrifthttp
- exporter/kafka
+ - exporter/kinetica
- exporter/loadbalancing
- exporter/logicmonitor
- exporter/logzio
- exporter/loki
- exporter/mezmo
- exporter/opencensus
- - exporter/parquet
+ - exporter/opensearch
+ - exporter/otelarrow
- exporter/prometheus
- exporter/prometheusremotewrite
- exporter/pulsar
@@ -67,14 +74,19 @@ body:
- exporter/splunkhec
- exporter/sumologic
- exporter/syslog
- - exporter/tanzuobservability
- exporter/tencentcloudlogservice
- exporter/zipkin
+ - extension/ack
- extension/asapauth
- extension/awsproxy
- extension/basicauth
- extension/bearertokenauth
- extension/encoding
+ - extension/encoding/jaegerencoding
+ - extension/encoding/jsonlogencoding
+ - extension/encoding/otlpencoding
+ - extension/encoding/textencoding
+ - extension/encoding/zipkinencoding
- extension/headerssetter
- extension/healthcheck
- extension/httpforwarder
@@ -82,56 +94,71 @@ body:
- extension/oauth2clientauth
- extension/observer
- extension/observer/dockerobserver
+ - extension/observer/ecsobserver
- extension/observer/ecstaskobserver
- extension/observer/hostobserver
- extension/observer/k8sobserver
- extension/oidcauth
+ - extension/opamp
- extension/pprof
+ - extension/remotetap
- extension/sigv4auth
+ - extension/solarwindsapmsettings
- extension/storage
- extension/storage/dbstorage
- extension/storage/filestorage
+ - extension/sumologic
- internal/aws
+ - internal/collectd
- internal/core
+ - internal/datadog
- internal/docker
+ - internal/exp/metrics
- internal/filter
- internal/k8sconfig
- internal/k8stest
+ - internal/kafka
- internal/kubelet
- internal/metadataproviders
- internal/sharedcomponent
- internal/splunk
+ - internal/sqlquery
- internal/tools
- pkg/batchperresourceattr
- pkg/batchpersignal
- pkg/experimentalmetricmetadata
+ - pkg/golden
- pkg/ottl
- pkg/pdatatest
- pkg/pdatautil
- pkg/resourcetotelemetry
+ - pkg/sampling
- pkg/stanza
+ - pkg/translator/azure
- pkg/translator/jaeger
- pkg/translator/loki
- pkg/translator/opencensus
- pkg/translator/prometheus
- pkg/translator/prometheusremotewrite
- pkg/translator/signalfx
+ - pkg/translator/skywalking
- pkg/translator/zipkin
- pkg/winperfcounters
- processor/attributes
- processor/cumulativetodelta
- - processor/datadog
+ - processor/deltatocumulative
- processor/deltatorate
- processor/filter
- processor/groupbyattrs
- processor/groupbytrace
+ - processor/interval
- processor/k8sattributes
- processor/logstransform
- processor/metricsgeneration
- processor/metricstransform
- processor/probabilisticsampler
- processor/redaction
- - processor/remoteobserver
+ - processor/remotetap
- processor/resource
- processor/resourcedetection
- processor/resourcedetection/internal/azure
@@ -139,9 +166,9 @@ body:
- processor/resourcedetection/internal/openshift
- processor/routing
- processor/schema
- - processor/servicegraph
- processor/span
- processor/spanmetrics
+ - processor/sumologic
- processor/tailsampling
- processor/transform
- receiver/activedirectoryds
@@ -168,11 +195,11 @@ body:
- receiver/dockerstats
- receiver/elasticsearch
- receiver/expvar
- - receiver/file
- receiver/filelog
- receiver/filestats
- receiver/flinkmetrics
- receiver/fluentforward
+ - receiver/gitprovider
- receiver/googlecloudpubsub
- receiver/googlecloudspanner
- receiver/haproxy
@@ -194,15 +221,17 @@ body:
- receiver/mongodb
- receiver/mongodbatlas
- receiver/mysql
+ - receiver/namedpipe
- receiver/nginx
- receiver/nsxt
- receiver/opencensus
- receiver/oracledb
+ - receiver/osquery
+ - receiver/otelarrow
- receiver/otlpjsonfile
- receiver/podman
- receiver/postgresql
- receiver/prometheus
- - receiver/prometheusexec
- receiver/pulsar
- receiver/purefa
- receiver/purefb
@@ -235,7 +264,6 @@ body:
- receiver/zipkin
- receiver/zookeeper
- testbed
- - testbed/mockdatareceivers/mockawsxrayreceiver
- testbed/mockdatasenders/mockdatadogagentexporter
# End Collector components list
- type: textarea
diff --git a/.github/auto_assign.yml b/.github/auto_assign.yml
index 4fbf22a1ee9e..939291a8b34e 100644
--- a/.github/auto_assign.yml
+++ b/.github/auto_assign.yml
@@ -12,10 +12,12 @@ assigneeGroups:
approvers_maintainers:
# Approvers
- Aneurysm9
+ - astencel-sumo
- atoulme
+ - bryan-aguilar
- dashpole
+ - songy23
- fatsheep9146
- - kovrus
# Maintainers
- bogdandrutu
- codeboten
diff --git a/.github/dependabot.yml b/.github/dependabot.yml
deleted file mode 100644
index 36e98eeb81f2..000000000000
--- a/.github/dependabot.yml
+++ /dev/null
@@ -1,1104 +0,0 @@
-# File generated by "make gendependabot"; DO NOT EDIT.
-
-version: 2
-updates:
- - package-ecosystem: "gomod"
- directory: "/"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/cmd/configschema"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/cmd/mdatagen"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/cmd/opampsupervisor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/cmd/otelcontribcol"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/cmd/oteltestbedcol"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/cmd/telemetrygen"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/confmap/provider/s3provider"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/connector/countconnector"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/connector/routingconnector"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/connector/servicegraphconnector"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/connector/spanmetricsconnector"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/examples/demo/client"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/examples/demo/server"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/alibabacloudlogserviceexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/awscloudwatchlogsexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/awsemfexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/awskinesisexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/awss3exporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/awsxrayexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/azuredataexplorerexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/azuremonitorexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/carbonexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/cassandraexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/clickhouseexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/coralogixexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/datadogexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/datasetexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/dynatraceexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/elasticsearchexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/f5cloudexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/fileexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/googlecloudexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/googlecloudpubsubexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/googlemanagedprometheusexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/influxdbexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/instanaexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/jaegerexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/jaegerthrifthttpexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/kafkaexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/loadbalancingexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/logicmonitorexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/logzioexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/lokiexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/mezmoexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/opencensusexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/parquetexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/prometheusexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/prometheusremotewriteexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/pulsarexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/sapmexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/sentryexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/signalfxexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/skywalkingexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/splunkhecexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/sumologicexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/syslogexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/tanzuobservabilityexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/tencentcloudlogserviceexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/exporter/zipkinexporter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/asapauthextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/awsproxy"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/basicauthextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/bearertokenauthextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/encodingextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/headerssetterextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/healthcheckextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/httpforwarder"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/jaegerremotesampling"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/oauth2clientauthextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/observer"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/observer/dockerobserver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/observer/ecsobserver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/observer/ecstaskobserver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/observer/hostobserver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/observer/k8sobserver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/oidcauthextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/pprofextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/sigv4authextension"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/extension/storage"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/awsutil"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/containerinsight"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/cwlogs"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/ecsutil"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/k8s"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/metrics"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/proxy"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/xray"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/xray/testdata/sampleapp"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/aws/xray/testdata/sampleserver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/common"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/coreinternal"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/docker"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/filter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/k8sconfig"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/k8stest"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/kubelet"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/metadataproviders"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/sharedcomponent"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/splunk"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/internal/tools"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/batchperresourceattr"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/batchpersignal"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/experimentalmetricmetadata"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/ottl"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/pdatatest"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/pdatautil"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/resourcetotelemetry"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/stanza"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/translator/jaeger"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/translator/loki"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/translator/opencensus"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/translator/prometheus"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/translator/prometheusremotewrite"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/translator/signalfx"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/translator/zipkin"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/pkg/winperfcounters"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/attributesprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/cumulativetodeltaprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/datadogprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/deltatorateprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/filterprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/groupbyattrsprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/groupbytraceprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/k8sattributesprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/logstransformprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/metricsgenerationprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/metricstransformprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/probabilisticsamplerprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/redactionprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/remoteobserverprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/resourcedetectionprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/resourceprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/routingprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/schemaprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/servicegraphprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/spanmetricsprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/spanprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/tailsamplingprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/processor/transformprocessor"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/activedirectorydsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/aerospikereceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/apachereceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/apachesparkreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/awscloudwatchmetricsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/awscloudwatchreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/awscontainerinsightreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/awsecscontainermetricsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/awsfirehosereceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/awsxrayreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/azureblobreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/azureeventhubreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/azuremonitorreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/bigipreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/carbonreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/chronyreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/cloudflarereceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/cloudfoundryreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/collectdreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/couchdbreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/datadogreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/dockerstatsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/elasticsearchreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/expvarreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/filelogreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/filereceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/filestatsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/flinkmetricsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/fluentforwardreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/googlecloudpubsubreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/googlecloudspannerreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/haproxyreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/hostmetricsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/httpcheckreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/iisreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/influxdbreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/jaegerreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/jmxreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/journaldreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/k8sclusterreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/k8seventsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/k8sobjectsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/kafkametricsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/kafkareceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/kubeletstatsreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/lokireceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/memcachedreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/mongodbatlasreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/mongodbreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/mysqlreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/nginxreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/nsxtreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/opencensusreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/oracledbreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/otlpjsonfilereceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/podmanreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/postgresqlreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/prometheusexecreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/prometheusreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/pulsarreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/purefareceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/purefbreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/rabbitmqreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/receivercreator"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/redisreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/riakreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/saphanareceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/sapmreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/signalfxreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/simpleprometheusreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/simpleprometheusreceiver/examples/federation/prom-counter"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/skywalkingreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/snmpreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/snowflakereceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/solacereceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/splunkenterprisereceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/splunkhecreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/sqlqueryreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/sqlserverreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
- - package-ecosystem: "gomod"
- directory: "/receiver/sshcheckreceiver"
- schedule:
- interval: "weekly"
- day: "wednesday"
diff --git a/.github/workflows/add-codeowners-to-pr.yml b/.github/workflows/add-codeowners-to-pr.yml
index 50a3208b495f..12d3fced3674 100644
--- a/.github/workflows/add-codeowners-to-pr.yml
+++ b/.github/workflows/add-codeowners-to-pr.yml
@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.actor != 'dependabot[bot]' && github.repository_owner == 'open-telemetry' }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Run add-codeowners-to-pr.sh
run: ./.github/workflows/scripts/add-codeowners-to-pr.sh
diff --git a/.github/workflows/add-labels.yml b/.github/workflows/add-labels.yml
index f5597c20d46d..f9742654b507 100644
--- a/.github/workflows/add-labels.yml
+++ b/.github/workflows/add-labels.yml
@@ -9,7 +9,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Run update permissions
run: chmod +x ./.github/workflows/scripts/add-labels.sh
diff --git a/.github/workflows/auto-assign-owners.yml b/.github/workflows/auto-assign-owners.yml
index df6d2207d87b..538bc31bdb0b 100644
--- a/.github/workflows/auto-assign-owners.yml
+++ b/.github/workflows/auto-assign-owners.yml
@@ -13,7 +13,7 @@ jobs:
if: ${{ github.actor != 'dependabot[bot]' }}
steps:
- name: run
- uses: kentaro-m/auto-assign-action@v1.2.5
+ uses: kentaro-m/auto-assign-action@v1.2.6
with:
configuration-path: ".github/auto_assign.yml"
repo-token: '${{ secrets.GITHUB_TOKEN }}'
diff --git a/.github/workflows/auto-update-jmx-metrics-component.yml b/.github/workflows/auto-update-jmx-metrics-component.yml
index 357898e9502d..8c16408f9778 100644
--- a/.github/workflows/auto-update-jmx-metrics-component.yml
+++ b/.github/workflows/auto-update-jmx-metrics-component.yml
@@ -14,7 +14,7 @@ jobs:
already-added: ${{ steps.check-versions.outputs.already-added }}
already-opened: ${{ steps.check-versions.outputs.already-opened }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- id: check-versions
name: Check versions
@@ -55,7 +55,7 @@ jobs:
needs:
- check-versions
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Update version
env:
diff --git a/.github/workflows/build-and-test-windows.yml b/.github/workflows/build-and-test-windows.yml
index 50e8ee8f40cd..101e76f97fd3 100644
--- a/.github/workflows/build-and-test-windows.yml
+++ b/.github/workflows/build-and-test-windows.yml
@@ -6,15 +6,16 @@ on:
- 'releases/**'
tags:
- 'v[0-9]+.[0-9]+.[0-9]+*'
+ merge_group:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
branches:
- main
env:
TEST_RESULTS: testbed/tests/results/junit/results.xml
- # See: https://github.com/actions/cache/issues/810#issuecomment-1222550359
- # Cache downloads for this workflow consistently run in under 10 minutes
- SEGMENT_DOWNLOAD_TIMEOUT_MINS: 15
+ # Make sure to exit early if cache segment download times out after 2 minutes.
+ # We limit cache download as a whole to 5 minutes.
+ SEGMENT_DOWNLOAD_TIMEOUT_MINS: 2
# Do not cancel this workflow on main
concurrency:
@@ -33,30 +34,40 @@ jobs:
- extension
- internal
- pkg
+ - cmd
- other
runs-on: windows-latest
- if: ${{ github.actor != 'dependabot[bot]' && (contains(github.event.pull_request.labels.*.name, 'Run Windows') || github.event_name == 'push') }}
+ if: ${{ github.actor != 'dependabot[bot]' && (contains(github.event.pull_request.labels.*.name, 'Run Windows') || github.event_name == 'push' || github.event_name == 'merge_group') }}
+ env:
+ # Limit memory usage via GC environment variables to avoid OOM on GH runners, especially for `cmd/otelcontribcol`,
+ # see https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/28682#issuecomment-1802296776
+ GOGC: 50
+ GOMEMLIMIT: 2GiB
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- if: matrix.group == 'receiver-0'
name: install IIS
run: Install-WindowsFeature -name Web-Server -IncludeManagementTools
- - uses: actions/setup-go@v4
+ - uses: actions/setup-go@v5
with:
- go-version: ~1.19.10
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-mod-cache
+ timeout-minutes: 25
uses: actions/cache@v3
with:
path: |
~\go\pkg\mod
~\AppData\Local\go-build
key: go-build-cache-${{ runner.os }}-${{ matrix.group }}-go-${{ hashFiles('**/go.sum') }}
+ - if: matrix.group == 'cmd'
+ name: Increasing GOTEST_TIMEOUT for group 'cmd'
+ run: echo "GOTEST_TIMEOUT=1200s" >> $Env:GITHUB_ENV
- name: Run Unit tests
run: make -j2 gotest GROUP=${{ matrix.group }}
windows-unittest:
- if: ${{ github.actor != 'dependabot[bot]' && (contains(github.event.pull_request.labels.*.name, 'Run Windows') || github.event_name == 'push') }}
+ if: ${{ github.actor != 'dependabot[bot]' && (contains(github.event.pull_request.labels.*.name, 'Run Windows') || github.event_name == 'push' || github.event_name == 'merge_group') }}
runs-on: windows-latest
needs: [windows-unittest-matrix]
steps:
diff --git a/.github/workflows/build-and-test.yml b/.github/workflows/build-and-test.yml
index 2e6cdfd9e7c8..284136b921e0 100644
--- a/.github/workflows/build-and-test.yml
+++ b/.github/workflows/build-and-test.yml
@@ -4,12 +4,13 @@ on:
branches: [ main ]
tags:
- 'v[0-9]+.[0-9]+.[0-9]+*'
+ merge_group:
pull_request:
env:
TEST_RESULTS: testbed/tests/results/junit/results.xml
- # See: https://github.com/actions/cache/issues/810#issuecomment-1222550359
- # Cache downloads for this workflow consistently run in under 1 minute
- SEGMENT_DOWNLOAD_TIMEOUT_MINS: 5
+ # Make sure to exit early if cache segment download times out after 2 minutes.
+ # We limit cache download as a whole to 5 minutes.
+ SEGMENT_DOWNLOAD_TIMEOUT_MINS: 2
# Do not cancel this workflow on main. See https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/16616
concurrency:
@@ -22,13 +23,14 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.actor != 'dependabot[bot]' }}
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -45,14 +47,14 @@ jobs:
runs-on: ubuntu-latest
needs: [setup-environment]
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Check Collector Module Version
run: ./.github/workflows/scripts/check-collector-module-version.sh
check-codeowners:
runs-on: ubuntu-latest
needs: [setup-environment]
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Check Code Owner Existence
run: ./.github/workflows/scripts/check-codeowners.sh check_code_owner_existence
- name: Check Component Existence
@@ -62,26 +64,35 @@ jobs:
lint-matrix:
strategy:
matrix:
+      os:
+ - windows
+ - linux
group:
- receiver-0
- receiver-1
+ - receiver-2
+ - receiver-3
- processor
- - exporter
+ - exporter-0
+ - exporter-1
- extension
- connector
- internal
- pkg
+ - cmd-0
+ - cmd-1
- other
runs-on: ubuntu-latest
needs: [setup-environment]
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: "1.20"
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -100,7 +111,7 @@ jobs:
path: ~/.cache/go-build
key: go-lint-build-${{ matrix.group }}-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
- name: Lint
- run: make -j2 golint GROUP=${{ matrix.group }}
+ run: GOOS=${{ matrix.os }} GOARCH=amd64 make -j2 golint GROUP=${{ matrix.group }}
lint:
if: ${{ github.actor != 'dependabot[bot]' && always() }}
runs-on: ubuntu-latest
@@ -124,24 +135,30 @@ jobs:
group:
- receiver-0
- receiver-1
+ - receiver-2
+ - receiver-3
- processor
- - exporter
+ - exporter-0
+ - exporter-1
- extension
- connector
- internal
- pkg
+ - cmd-0
+ - cmd-1
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout Repo
- uses: actions/checkout@v3
+ uses: actions/checkout@v4
- name: Setup Go
- uses: actions/setup-go@v4
+ uses: actions/setup-go@v5
with:
- go-version: ~1.19.10
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -157,13 +174,14 @@ jobs:
runs-on: ubuntu-latest
needs: [setup-environment]
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -178,6 +196,10 @@ jobs:
run: make install-tools
- name: CheckDoc
run: make checkdoc
+ - name: CheckMetadata
+ run: make checkmetadata
+ - name: CheckApi
+ run: make checkapi
- name: Porto
run: |
make -j2 goporto
@@ -198,14 +220,14 @@ jobs:
run: |
make genoteltestbedcol
git diff -s --exit-code || (echo 'Generated code is out of date, please run "make genoteltestbedcol" and commit the changes in this PR.' && exit 1)
+ - name: Gen distributions
+ run: |
+ make gendistributions
+ git diff -s --exit-code || (echo 'Generated code is out of date, please run "make gendistributions" and commit the changes in this PR.' && exit 1)
- name: CodeGen
run: |
make -j2 generate
git diff --exit-code ':!*go.sum' || (echo 'Generated code is out of date, please run "make generate" and commit the changes in this PR.' && exit 1)
- - name: Check gendependabot
- run: |
- make -j2 gendependabot
- git diff --exit-code ':!*go.sum' || (echo 'dependabot.yml is out of date, please run "make gendependabot" and commit the changes in this PR.' && exit 1)
- name: MultimodVerify
run: make multimod-verify
- name: Components dropdown in issue templates
@@ -215,27 +237,33 @@ jobs:
unittest-matrix:
strategy:
matrix:
- go-version: ["1.20", 1.19] # 1.20 is interpreted as 1.2 without quotes
+ go-version: ["1.22.0", "1.21.7"] # 1.20 is interpreted as 1.2 without quotes
group:
- receiver-0
- receiver-1
+ - receiver-2
+ - receiver-3
- processor
- - exporter
+ - exporter-0
+ - exporter-1
- extension
- connector
- internal
- pkg
+ - cmd-0
+ - cmd-1
- other
runs-on: ubuntu-latest
needs: [setup-environment]
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -254,21 +282,18 @@ jobs:
path: ~/.cache/go-build
key: go-test-build-${{ runner.os }}-${{ matrix.go-version }}-${{ hashFiles('**/go.sum') }}
- name: Run Unit Tests
- if: ${{ matrix.go-version == '1.19' }}
+ if: startsWith( matrix.go-version, '1.21' ) != true
run: make gotest GROUP=${{ matrix.group }}
- name: Run Unit Tests With Coverage
- if: ${{ matrix.go-version == '1.20' }} # only run coverage on one version
+ if: startsWith( matrix.go-version, '1.21' ) # only run coverage on one version
run: make gotest-with-cover GROUP=${{ matrix.group }}
- - uses: actions/upload-artifact@v3
- if: ${{ matrix.go-version == '1.20' }} # only run coverage on one version
+ - uses: actions/upload-artifact@v4
+ if: startsWith( matrix.go-version, '1.21' ) # only upload artifact for one version
with:
- name: coverage-artifacts
+ name: coverage-artifacts-${{ matrix.go-version }}-${{ matrix.group }}
path: ${{ matrix.group }}-coverage.txt
unittest:
if: ${{ github.actor != 'dependabot[bot]' && always() }}
- strategy:
- matrix:
- go-version: ["1.20", 1.19] # 1.20 is interpreted as 1.2 without quotes
runs-on: ubuntu-latest
needs: [setup-environment, unittest-matrix]
steps:
@@ -287,12 +312,13 @@ jobs:
runs-on: ubuntu-latest
needs: [unittest]
steps:
- - uses: actions/checkout@v3
- - uses: actions/download-artifact@v3
+ - uses: actions/checkout@v4
+ - uses: actions/download-artifact@v4
with:
- name: coverage-artifacts
+ merge-multiple: true
+ pattern: coverage-artifacts-*
- name: Upload coverage report
- uses: Wandalen/wretry.action@v1.3.0
+ uses: Wandalen/wretry.action@v1.4.4
with:
action: codecov/codecov-action@v3
with: |
@@ -301,17 +327,34 @@ jobs:
attempt_limit: 10
attempt_delay: 15000
- integration-tests:
+ integration-tests-matrix:
+ strategy:
+ matrix:
+ group:
+ - receiver-0
+ - receiver-1
+ - receiver-2
+ - receiver-3
+ - processor
+ - exporter-0
+ - exporter-1
+ - extension
+ - connector
+ - internal
+ - pkg
+ - cmd-0
+ - cmd-1
runs-on: ubuntu-latest
needs: [setup-environment]
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -319,19 +362,37 @@ jobs:
~/go/pkg/mod
key: go-cache-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
- name: Run Integration Tests
- run: make integration-test
+ run: make gointegration-test GROUP=${{ matrix.group }}
+
+ integration-tests:
+ if: ${{ github.actor != 'dependabot[bot]' && always() }}
+ runs-on: ubuntu-latest
+ needs: [ setup-environment, integration-tests-matrix ]
+ steps:
+ - name: Print result
+ run: echo ${{ needs.integration-tests-matrix.result }}
+ - name: Interpret result
+ run: |
+ if [[ success == ${{ needs.integration-tests-matrix.result }} ]]
+ then
+ echo "All matrix jobs passed!"
+ else
+ echo "One or more matrix jobs failed."
+ false
+ fi
correctness-traces:
runs-on: ubuntu-latest
needs: [setup-environment]
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -350,13 +411,14 @@ jobs:
runs-on: ubuntu-latest
needs: [setup-environment]
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -376,13 +438,13 @@ jobs:
runs-on: ubuntu-latest
needs: [setup-environment]
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Build Examples
run: make build-examples
cross-compile:
runs-on: ubuntu-latest
- needs: [unittest, integration-tests, lint]
+ needs: [setup-environment]
strategy:
matrix:
os:
@@ -395,6 +457,7 @@ jobs:
- arm
- arm64
- ppc64le
+ - s390x
include:
- os: linux
- arch: arm
@@ -406,20 +469,25 @@ jobs:
arch: arm
- os: darwin
arch: ppc64le
+ - os: darwin
+ arch: s390x
- os: windows
arch: arm
- os: windows
arch: arm64
- os: windows
arch: ppc64le
+ - os: windows
+ arch: s390x
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -435,34 +503,34 @@ jobs:
- name: Build Collector ${{ matrix.binary }}
run: make GOOS=${{ matrix.os }} GOARCH=${{ matrix.arch }} GOARM=${{ matrix.arm }} otelcontribcol
- name: Upload Collector Binaries
- uses: actions/upload-artifact@v3
+ uses: actions/upload-artifact@v4
with:
- name: collector-binaries
+ name: collector-binaries-${{ matrix.os }}-${{ matrix.arch }}
path: ./bin/*
build-package:
- # Use 20.04.5 until https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/16450 is resolved
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-latest
needs: [cross-compile]
strategy:
fail-fast: false
matrix:
package_type: ["deb", "rpm"]
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Ruby
uses: ruby/setup-ruby@v1
with:
- ruby-version: '2.6'
+ ruby-version: '3.3'
- name: Install fpm
- run: gem install --no-document fpm -v 1.11.0
+ run: gem install --no-document fpm -v 1.15.1
- name: Download Collector Binaries
- uses: actions/download-artifact@v3
+ uses: actions/download-artifact@v4
with:
- name: collector-binaries
+ merge-multiple: true
path: bin/
+ pattern: collector-binaries-*
- run: chmod +x bin/*
- name: Set Release Tag
id: github_tag
@@ -473,6 +541,8 @@ jobs:
run: ./internal/buildscripts/packaging/fpm/${{ matrix.package_type }}/build.sh "${{ steps.github_tag.outputs.tag }}" "arm64" "./dist/"
- name: Build ${{ matrix.package_type }} ppc64le package
run: ./internal/buildscripts/packaging/fpm/${{ matrix.package_type }}/build.sh "${{ steps.github_tag.outputs.tag }}" "ppc64le" "./dist/"
+ - name: Build ${{ matrix.package_type }} s390x package
+ run: ./internal/buildscripts/packaging/fpm/${{ matrix.package_type }}/build.sh "${{ steps.github_tag.outputs.tag }}" "s390x" "./dist/"
- name: Test ${{ matrix.package_type }} package
run: |
if [[ "${{ matrix.package_type }}" = "deb" ]]; then
@@ -481,23 +551,24 @@ jobs:
./internal/buildscripts/packaging/fpm/test.sh dist/otel-contrib-collector*x86_64.rpm examples/demo/otel-collector-config.yaml
fi
- name: Upload Packages
- uses: actions/upload-artifact@v3
+ uses: actions/upload-artifact@v4
with:
- name: collector-packages
+ name: collector-packages-${{ matrix.package_type }}
path: ./dist/*
windows-msi:
if: false # skip. See https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/10113
runs-on: windows-latest
needs: [cross-compile]
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Download Binaries
- uses: actions/download-artifact@v3
+ uses: actions/download-artifact@v4
with:
- name: collector-binaries
+ merge-multiple: true
path: ./bin/
+ pattern: collector-binaries-*
- name: Cache Wix
id: wix-cache
uses: actions/cache@v3
@@ -516,26 +587,28 @@ jobs:
- name: Validate MSI
run: .\internal\buildscripts\packaging\msi\make.ps1 Confirm-MSI
- name: Upload MSI
- uses: actions/upload-artifact@v3
+ uses: actions/upload-artifact@v4
with:
- name: collector-packages
+ name: collector-packages-msi
path: ./dist/*.msi
publish-check:
runs-on: ubuntu-latest
- needs: [build-package]
+ needs: [lint, unittest, integration-tests, build-package]
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Download Binaries
- uses: actions/download-artifact@v3
+ uses: actions/download-artifact@v4
with:
- name: collector-binaries
+ merge-multiple: true
path: ./bin/
+ pattern: collector-binaries-*
- name: Download Packages
- uses: actions/download-artifact@v3
+ uses: actions/download-artifact@v4
with:
- name: collector-packages
+ merge-multiple: true
path: ./dist/
+ pattern: collector-packages-*
- name: Verify Distribution Files Exist
id: check
run: ./.github/workflows/scripts/verify-dist-files-exist.sh
@@ -544,16 +617,17 @@ jobs:
needs: [lint, unittest, integration-tests, build-package]
if: (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v')) && github.repository == 'open-telemetry/opentelemetry-collector-contrib'
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Mkdir bin and dist
run: |
mkdir bin/ dist/
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -567,16 +641,18 @@ jobs:
if: steps.go-cache.outputs.cache-hit != 'true'
run: make install-tools
- name: Download Binaries
- uses: actions/download-artifact@v3
+ uses: actions/download-artifact@v4
with:
- name: collector-binaries
+ merge-multiple: true
path: ./bin/
+ pattern: collector-binaries-*
- run: chmod +x bin/*
- name: Download Packages
- uses: actions/download-artifact@v3
+ uses: actions/download-artifact@v4
with:
- name: collector-packages
+ merge-multiple: true
path: ./dist/
+ pattern: collector-packages-*
- name: Add Permissions to Tool Binaries
run: chmod -R +x ./dist
- name: Verify Distribution Files Exist
@@ -594,7 +670,7 @@ jobs:
docker run otel/opentelemetry-collector-contrib-dev:$GITHUB_SHA --version
docker run otel/opentelemetry-collector-contrib-dev:latest --version
- name: Login to Docker Hub
- uses: docker/login-action@v2
+ uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
@@ -608,7 +684,7 @@ jobs:
needs: [lint, unittest, integration-tests, build-package]
if: startsWith(github.ref, 'refs/tags/v') && github.repository == 'open-telemetry/opentelemetry-collector-contrib'
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Set Release Tag
id: github_tag
run: ./.github/workflows/scripts/set_release_tag.sh
@@ -626,7 +702,7 @@ jobs:
needs: [publish-stable]
if: startsWith(github.ref, 'refs/tags/v') && github.repository == 'open-telemetry/opentelemetry-collector-contrib'
steps:
- - uses: actions/github-script@v6
+ - uses: actions/github-script@v7
with:
script: |
const milestones = await github.rest.issues.listMilestones({
diff --git a/.github/workflows/changelog.yml b/.github/workflows/changelog.yml
index e05e054901d9..d3cf3b18459d 100644
--- a/.github/workflows/changelog.yml
+++ b/.github/workflows/changelog.yml
@@ -12,9 +12,9 @@ on:
- main
env:
- # See: https://github.com/actions/cache/issues/810#issuecomment-1222550359
- # Cache downloads for this workflow consistently run in under 1 minute
- SEGMENT_DOWNLOAD_TIMEOUT_MINS: 5
+ # Make sure to exit early if cache segment download times out after 2 minutes.
+ # We limit cache download as a whole to 5 minutes.
+ SEGMENT_DOWNLOAD_TIMEOUT_MINS: 2
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
@@ -23,18 +23,21 @@ concurrency:
jobs:
changelog:
runs-on: ubuntu-latest
- if: ${{ !contains(github.event.pull_request.labels.*.name, 'dependencies') && !contains(github.event.pull_request.labels.*.name, 'Skip Changelog') && !contains(github.event.pull_request.title, '[chore]')}}
+ if: ${{ github.actor != 'dependabot[bot]' }}
+ env:
+ PR_HEAD: ${{ github.event.pull_request.head.sha }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
fetch-depth: 0
- - uses: actions/setup-go@v4
+ - uses: actions/setup-go@v5
with:
- go-version: ~1.19.10
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -42,22 +45,24 @@ jobs:
~/go/pkg/mod
key: changelog-${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
- - name: Ensure no changes to the CHANGELOG
+ - name: Ensure no changes to the CHANGELOG.md or CHANGELOG-API.md
+ if: ${{ !contains(github.event.pull_request.labels.*.name, 'dependencies') && !contains(github.event.pull_request.labels.*.name, 'Skip Changelog') && !contains(github.event.pull_request.title, '[chore]')}}
run: |
- if [[ $(git diff --name-only $(git merge-base origin/main ${{ github.event.pull_request.head.sha }}) ${{ github.event.pull_request.head.sha }} ./CHANGELOG.md) ]]
+ if [[ $(git diff --name-only $(git merge-base origin/main $PR_HEAD) $PR_HEAD ./CHANGELOG*.md) ]]
then
- echo "The CHANGELOG should not be directly modified."
+ echo "CHANGELOG.md and CHANGELOG-API.md should not be directly modified."
echo "Please add a .yaml file to the ./.chloggen/ directory."
echo "See CONTRIBUTING.md for more details."
echo "Alternately, add either \"[chore]\" to the title of the pull request or add the \"Skip Changelog\" label if this job should be skipped."
false
else
- echo "The CHANGELOG was not modified."
+ echo "CHANGELOG.md and CHANGELOG-API.md were not modified."
fi
- name: Ensure ./.chloggen/*.yaml addition(s)
+ if: ${{ !contains(github.event.pull_request.labels.*.name, 'dependencies') && !contains(github.event.pull_request.labels.*.name, 'Skip Changelog') && !contains(github.event.pull_request.title, '[chore]')}}
run: |
- if [[ 1 -gt $(git diff --diff-filter=A --name-only $(git merge-base origin/main ${{ github.event.pull_request.head.sha }}) ${{ github.event.pull_request.head.sha }} ./.chloggen | grep -c \\.yaml) ]]
+ if [[ 1 -gt $(git diff --diff-filter=A --name-only $(git merge-base origin/main $PR_HEAD) $PR_HEAD ./.chloggen | grep -c \\.yaml) ]]
then
echo "No changelog entry was added to the ./.chloggen/ directory."
echo "Please add a .yaml file to the ./.chloggen/ directory."
@@ -75,10 +80,13 @@ jobs:
# In order to validate any links in the yaml file, render the config to markdown
- name: Render .chloggen changelog entries
+ if: ${{ !contains(github.event.pull_request.labels.*.name, 'dependencies') && !contains(github.event.pull_request.labels.*.name, 'Skip Changelog') && !contains(github.event.pull_request.title, '[chore]')}}
run: make chlog-preview > changelog_preview.md
- name: Install markdown-link-check
+ if: ${{ !contains(github.event.pull_request.labels.*.name, 'dependencies') && !contains(github.event.pull_request.labels.*.name, 'Skip Changelog') && !contains(github.event.pull_request.title, '[chore]')}}
run: npm install -g markdown-link-check
- name: Run markdown-link-check
+ if: ${{ !contains(github.event.pull_request.labels.*.name, 'dependencies') && !contains(github.event.pull_request.labels.*.name, 'Skip Changelog') && !contains(github.event.pull_request.title, '[chore]')}}
run: |
markdown-link-check \
--verbose \
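The two changelog guard steps above reduce to a policy over the list of files changed between the PR head and its merge-base with main: CHANGELOG files must not be touched directly, and at least one `.chloggen/*.yaml` entry must be added. A minimal sketch of that policy with the git plumbing factored out (the function names are illustrative, not part of the workflow):

```shell
#!/bin/sh
# Fail when any CHANGELOG*.md appears in the changed-file list,
# echoing the same guidance as the workflow step above.
check_no_changelog_edits() {
  for f in "$@"; do
    case "$f" in
      CHANGELOG*.md)
        echo "CHANGELOG.md and CHANGELOG-API.md should not be directly modified."
        echo "Please add a .yaml file to the ./.chloggen/ directory."
        return 1 ;;
    esac
  done
  echo "CHANGELOG.md and CHANGELOG-API.md were not modified."
}

# Count added .chloggen entries; the workflow requires at least one.
count_chloggen_entries() {
  printf '%s\n' "$@" | grep -c '\.chloggen/.*\.yaml$'
}
```

In the workflow itself the file lists come from `git diff --name-only` against `$(git merge-base origin/main $PR_HEAD)`, which is why the job checks out with `fetch-depth: 0`.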
diff --git a/.github/workflows/check-links.yaml b/.github/workflows/check-links.yaml
index af6afe9eb535..83817d99f1f5 100644
--- a/.github/workflows/check-links.yaml
+++ b/.github/workflows/check-links.yaml
@@ -13,23 +13,25 @@ jobs:
changedfiles:
name: changed files
runs-on: ubuntu-latest
+ env:
+ PR_HEAD: ${{ github.event.pull_request.head.sha }}
if: ${{ github.actor != 'dependabot[bot]' }}
outputs:
md: ${{ steps.changes.outputs.md }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Get changed files
id: changes
run: |
- echo "md=$(git diff --name-only --diff-filter=ACMRTUXB $(git merge-base origin/main ${{ github.event.pull_request.head.sha }}) ${{ github.event.pull_request.head.sha }} | grep .md$ | xargs)" >> $GITHUB_OUTPUT
+ echo "md=$(git diff --name-only --diff-filter=ACMRTUXB $(git merge-base origin/main $PR_HEAD) $PR_HEAD | grep .md$ | xargs)" >> $GITHUB_OUTPUT
check-links:
runs-on: ubuntu-latest
needs: changedfiles
if: ${{needs.changedfiles.outputs.md}}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
fetch-depth: 0
diff --git a/.github/workflows/close-stale.yaml b/.github/workflows/close-stale.yaml
index bc1df5ddc486..b78d73c099de 100644
--- a/.github/workflows/close-stale.yaml
+++ b/.github/workflows/close-stale.yaml
@@ -12,7 +12,7 @@ jobs:
steps:
- name: Check rate_limit before
run: gh api /rate_limit
- - uses: actions/stale@v8
+ - uses: actions/stale@v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: 'This PR was marked stale due to lack of activity. It will be closed in 14 days.'
diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml
index 083052f76be3..ec357a5ec15b 100644
--- a/.github/workflows/codeql-analysis.yml
+++ b/.github/workflows/codeql-analysis.yml
@@ -7,6 +7,7 @@ on:
jobs:
CodeQL-Build:
runs-on: macos-latest
+ if: ${{ github.actor != 'dependabot[bot]' }}
env:
# Force CodeQL to run the extraction on the files compiled by our custom
# build command, as opposed to letting the autobuilder figure it out.
@@ -14,15 +15,15 @@ jobs:
CODEQL_EXTRACTOR_GO_BUILD_TRACING: 'on'
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: ~1.19.10
+ go-version: "1.21.7"
cache: false
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
- uses: github/codeql-action/init@v2
+ uses: github/codeql-action/init@v3
with:
languages: go
@@ -31,6 +32,6 @@ jobs:
make otelcontribcol
- name: Perform CodeQL Analysis
- uses: github/codeql-action/analyze@v2
+ uses: github/codeql-action/analyze@v3
timeout-minutes: 60
diff --git a/.github/workflows/configs/e2e-kind-config.yaml b/.github/workflows/configs/e2e-kind-config.yaml
new file mode 100644
index 000000000000..21eecccabfc1
--- /dev/null
+++ b/.github/workflows/configs/e2e-kind-config.yaml
@@ -0,0 +1,11 @@
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+kubeadmConfigPatches:
+ - |
+ kind: KubeletConfiguration
+ serverTLSBootstrap: true
+nodes:
+ - role: control-plane
+ labels:
+ # used in k8sattributesprocessor e2e test
+ foo: too
diff --git a/.github/workflows/create-dependabot-pr.yml b/.github/workflows/create-dependabot-pr.yml
deleted file mode 100644
index 911f79dc0ed5..000000000000
--- a/.github/workflows/create-dependabot-pr.yml
+++ /dev/null
@@ -1,20 +0,0 @@
-name: Automation - Dependabot PR
-
-on:
- workflow_dispatch:
-
-jobs:
- create-pr:
- runs-on: ubuntu-latest
- steps:
- - uses: actions/checkout@v3
- - name: Install zsh
- run: sudo apt-get update; sudo apt-get install zsh
- - uses: actions/setup-go@v4
- with:
- go-version: ~1.19.10
- cache: false
- - name: Run dependabot-pr.sh
- run: ./.github/workflows/scripts/dependabot-pr.sh
- env:
- GITHUB_TOKEN: ${{ secrets.OPENTELEMETRYBOT_GITHUB_TOKEN }}
diff --git a/.github/workflows/e2e-tests.yml b/.github/workflows/e2e-tests.yml
index b9c7b98ab88a..b636c5ccbb4f 100644
--- a/.github/workflows/e2e-tests.yml
+++ b/.github/workflows/e2e-tests.yml
@@ -7,19 +7,87 @@ on:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+*'
pull_request:
+ merge_group:
+env:
+ # Make sure to exit early if cache segment download times out after 2 minutes.
+ # We limit cache download as a whole to 5 minutes.
+ SEGMENT_DOWNLOAD_TIMEOUT_MINS: 2
jobs:
+ collector-build:
+ runs-on: ubuntu-latest
+ if: ${{ github.actor != 'dependabot[bot]' }}
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
+ with:
+ go-version: "1.21.7"
+ cache: false
+ - name: Cache Go
+ id: go-cache
+ timeout-minutes: 5
+ uses: actions/cache@v3
+ with:
+ path: |
+ ~/go/bin
+ ~/go/pkg/mod
+ key: go-cache-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
+ - name: Install dependencies
+ if: steps.go-cache.outputs.cache-hit != 'true'
+ run: make -j2 gomoddownload
+ - name: Build Collector
+ run: make otelcontribcol
+ - name: Upload Collector Binary
+ uses: actions/upload-artifact@v4
+ with:
+ name: collector-binary
+ path: ./bin/*
+
+ supervisor-test:
+ runs-on: ubuntu-latest
+ needs: collector-build
+ steps:
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
+ with:
+ go-version: "1.21.7"
+ cache: false
+ - name: Cache Go
+ id: go-cache
+ timeout-minutes: 5
+ uses: actions/cache@v3
+ with:
+ path: |
+ ~/go/bin
+ ~/go/pkg/mod
+ key: go-cache-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
+ - name: Install dependencies
+ if: steps.go-cache.outputs.cache-hit != 'true'
+ run: make -j2 gomoddownload
+ - name: Download Collector Binary
+ uses: actions/download-artifact@v4
+ with:
+ name: collector-binary
+ path: bin/
+ - run: chmod +x bin/*
+ - name: Run opampsupervisor e2e tests
+ run: |
+ cd cmd/opampsupervisor
+ go test -v --tags=e2e
+
docker-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
- uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -36,43 +104,58 @@ jobs:
run: |
docker save otelcontribcol:latest > /tmp/otelcontribcol.tar
- name: Upload artifact
- uses: actions/upload-artifact@v3
+ uses: actions/upload-artifact@v4
with:
name: otelcontribcol
path: /tmp/otelcontribcol.tar
- kubernetes-test:
+
+ kubernetes-test-matrix:
env:
KUBECONFIG: /tmp/kube-config-otelcol-e2e-testing
strategy:
matrix:
- k8s-version: ["v1.26.0", "v1.25.3", "v1.24.7", "v1.23.13"]
+ k8s-version:
+ - "v1.26.0"
+ - "v1.25.3"
+ - "v1.24.7"
+ - "v1.23.13"
+ component:
+ - receiver/k8sclusterreceiver
+ - processor/k8sattributesprocessor
+ - receiver/kubeletstatsreceiver
+ - receiver/k8sobjectsreceiver
runs-on: ubuntu-latest
needs: docker-build
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
~/go/bin
~/go/pkg/mod
- key: go-cache-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
+ key: go-cache-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
- name: Install dependencies
if: steps.go-cache.outputs.cache-hit != 'true'
run: make -j2 gomoddownload
- name: Create kind cluster
- uses: helm/kind-action@v1.8.0
+ uses: helm/kind-action@v1.9.0
with:
node_image: kindest/node:${{ matrix.k8s-version }}
kubectl_version: ${{ matrix.k8s-version }}
cluster_name: kind
+ config: ./.github/workflows/configs/e2e-kind-config.yaml
+ - name: Fix kubelet TLS server certificates
+ run: |
+ kubectl get csr -o=jsonpath='{range.items[?(@.spec.signerName=="kubernetes.io/kubelet-serving")]}{.metadata.name}{" "}{end}' | xargs kubectl certificate approve
- name: Download artifact
- uses: actions/download-artifact@v3
+ uses: actions/download-artifact@v4
with:
name: otelcontribcol
path: /tmp
@@ -82,12 +165,25 @@ jobs:
- name: Kind load image
run: |
kind load docker-image otelcontribcol:latest --name kind
- - name: run k8sclusterreceiver e2e tests
+ - name: Run e2e tests
run: |
- cd receiver/k8sclusterreceiver
- go test -v --tags=e2e
- - name: run k8sattributesprocessor e2e tests
- run: |
- cd processor/k8sattributesprocessor
+ cd ${{ matrix.component }}
go test -v --tags=e2e
+ kubernetes-test:
+ if: ${{ github.actor != 'dependabot[bot]' && always() }}
+ runs-on: ubuntu-latest
+ needs: [ kubernetes-test-matrix ]
+ steps:
+ - name: Print result
+ run: echo ${{ needs.kubernetes-test-matrix.result }}
+ - name: Interpret result
+ run: |
+ if [[ success == ${{ needs.kubernetes-test-matrix.result }} ]]
+ then
+ echo "All matrix jobs passed!"
+ else
+ echo "One or more matrix jobs failed."
+ false
+ fi
+
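The fan-in `kubernetes-test` job above runs with `always()` so it executes even when matrix legs fail; all it can see is the matrix job's aggregate result string (`success`, `failure`, `cancelled`, or `skipped`), which the final step converts into an exit code so branch protection can key on a single job. A sketch of that interpretation step as a plain function (the name is illustrative):

```shell
#!/bin/sh
# Mirror the "Interpret result" step above: succeed only when the
# aggregate matrix result is the literal string "success".
interpret_result() {
  if [ "$1" = "success" ]; then
    echo "All matrix jobs passed!"
  else
    echo "One or more matrix jobs failed."
    return 1
  fi
}
```

This pattern keeps the required-status-check name stable while the matrix dimensions (k8s versions, components) change freely.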
diff --git a/.github/workflows/generate-component-labels.yml b/.github/workflows/generate-component-labels.yml
index 1af6c28406d1..c7c194677d28 100644
--- a/.github/workflows/generate-component-labels.yml
+++ b/.github/workflows/generate-component-labels.yml
@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'open-telemetry' }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Generate component labels
run: ./.github/workflows/scripts/generate-component-labels.sh
diff --git a/.github/workflows/generate-weekly-report.yml b/.github/workflows/generate-weekly-report.yml
new file mode 100644
index 000000000000..41048d45e9e7
--- /dev/null
+++ b/.github/workflows/generate-weekly-report.yml
@@ -0,0 +1,24 @@
+# This action generates a weekly report as a GitHub issue
+# More details in https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/24672
+
+name: 'Generate Weekly Report'
+on:
+ workflow_dispatch:
+ schedule:
+  # run every Tuesday at 1am UTC
+ - cron: "0 1 * * 2"
+
+jobs:
+ get_issues:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - run: npm install js-yaml
+ working-directory: ./.github/workflows/scripts
+ - uses: actions/github-script@v7
+ id: get-issues
+ with:
+ retries: 3
+ script: |
+ const script = require('.github/workflows/scripts/generate-weekly-report.js')
+ await script({github, context})
diff --git a/.github/workflows/load-tests.yml b/.github/workflows/load-tests.yml
index 7e280416c54b..79b5f952b3bf 100644
--- a/.github/workflows/load-tests.yml
+++ b/.github/workflows/load-tests.yml
@@ -4,7 +4,6 @@ on:
branches: [ main ]
tags:
- 'v[0-9]+.[0-9]+.[0-9]+*'
- pull_request:
# Do not cancel this workflow on main. See https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/16616
concurrency:
@@ -12,25 +11,26 @@ concurrency:
cancel-in-progress: true
env:
- # See: https://github.com/actions/cache/issues/810#issuecomment-1222550359
- # Cache downloads for this workflow consistently run in under 2 minutes
- SEGMENT_DOWNLOAD_TIMEOUT_MINS: 5
+ # Make sure to exit early if cache segment download times out after 2 minutes.
+ # We limit cache download as a whole to 5 minutes.
+ SEGMENT_DOWNLOAD_TIMEOUT_MINS: 2
jobs:
setup-environment:
timeout-minutes: 30
- runs-on: ubuntu-latest
+ runs-on: self-hosted
if: ${{ github.actor != 'dependabot[bot]' }}
outputs:
loadtest_matrix: ${{ steps.splitloadtest.outputs.loadtest_matrix }}
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -45,29 +45,30 @@ jobs:
if: steps.go-cache.outputs.cache-hit != 'true'
run: make install-tools
- run: make oteltestbedcol
- - name: Upload Collector Binaries
- uses: actions/upload-artifact@v3
+ - name: Upload Testbed Binaries
+ uses: actions/upload-artifact@v4
with:
- name: collector-binaries
+ name: testbed-binaries
path: ./bin/*
- name: Split Loadtest Jobs
id: splitloadtest
run: ./.github/workflows/scripts/setup_e2e_tests.sh
loadtest:
- runs-on: ubuntu-latest
+ runs-on: self-hosted
needs: [setup-environment]
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.setup-environment.outputs.loadtest_matrix) }}
steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-go@v4
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -82,10 +83,10 @@ jobs:
if: steps.go-cache.outputs.cache-hit != 'true'
run: make install-tools
- run: mkdir -p results && touch results/TESTRESULTS.md
- - name: Download Collector Binaries
- uses: actions/download-artifact@v3
+ - name: Download Testbed Binaries
+ uses: actions/download-artifact@v4
with:
- name: collector-binaries
+ name: testbed-binaries
path: bin/
- run: chmod +x bin/*
- name: Loadtest
@@ -102,27 +103,27 @@ jobs:
- name: Upload Test Results
if: ${{ failure() || success() }}
continue-on-error: true
- uses: actions/upload-artifact@v3
+ uses: actions/upload-artifact@v4
with:
path: ./*.tar
- run: cp testbed/tests/results/benchmarks.json testbed/tests/results/${{steps.filename.outputs.name}}.json
- name: Upload benchmarks.json
- uses: actions/upload-artifact@v3
+ uses: actions/upload-artifact@v4
with:
name: benchmark-results
path: testbed/tests/results/${{steps.filename.outputs.name}}.json
- name: GitHub Issue Generator
if: ${{ failure() && github.ref == 'refs/heads/main' }}
- run: issuegenerator $TEST_RESULTS
+ run: ./.tools/issuegenerator $TEST_RESULTS
update-benchmarks:
runs-on: ubuntu-latest
needs: [loadtest]
if: github.event_name != 'pull_request'
steps:
- - uses: actions/checkout@v3
- - uses: actions/download-artifact@v3
+ - uses: actions/checkout@v4
+ - uses: actions/download-artifact@v4
with:
name: benchmark-results
path: results
@@ -132,6 +133,7 @@ jobs:
tool: 'customSmallerIsBetter'
output-file-path: output.json
gh-pages-branch: benchmarks
+ max-items-in-chart: 100
github-token: ${{ secrets.GITHUB_TOKEN }}
benchmark-data-dir-path: "docs/benchmarks/loadtests"
auto-push: true
diff --git a/.github/workflows/mark-issues-as-stale.yml b/.github/workflows/mark-issues-as-stale.yml
index 1834b224297a..8cb4c88d35f7 100644
--- a/.github/workflows/mark-issues-as-stale.yml
+++ b/.github/workflows/mark-issues-as-stale.yml
@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'open-telemetry' }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Run mark-issues-as-stale.sh
run: ./.github/workflows/scripts/mark-issues-as-stale.sh
diff --git a/.github/workflows/milestone-add-to-pr.yml b/.github/workflows/milestone-add-to-pr.yml
index 526aac4d063e..eba56e603175 100644
--- a/.github/workflows/milestone-add-to-pr.yml
+++ b/.github/workflows/milestone-add-to-pr.yml
@@ -13,7 +13,7 @@ jobs:
if: github.event.pull_request.merged
runs-on: ubuntu-latest
steps:
- - uses: actions/github-script@v6
+ - uses: actions/github-script@v7
with:
script: |
const milestones = await github.rest.issues.listMilestones({
diff --git a/.github/workflows/ping-codeowners-issues.yml b/.github/workflows/ping-codeowners-issues.yml
index b1184540520b..9a58d23f1f69 100644
--- a/.github/workflows/ping-codeowners-issues.yml
+++ b/.github/workflows/ping-codeowners-issues.yml
@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'open-telemetry' }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Run ping-codeowners-issues.sh
run: ./.github/workflows/scripts/ping-codeowners-issues.sh
@@ -16,4 +16,3 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
ISSUE: ${{ github.event.issue.number }}
COMPONENT: ${{ github.event.label.name }}
- SENDER: ${{ github.event.sender.login }}
diff --git a/.github/workflows/ping-codeowners-on-new-issue.yml b/.github/workflows/ping-codeowners-on-new-issue.yml
index 02c3b9a23c20..f4a2025afe9d 100644
--- a/.github/workflows/ping-codeowners-on-new-issue.yml
+++ b/.github/workflows/ping-codeowners-on-new-issue.yml
@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'open-telemetry' }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Run ping-codeowners-on-new-issue.sh
run: ./.github/workflows/scripts/ping-codeowners-on-new-issue.sh
diff --git a/.github/workflows/ping-codeowners-prs.yml b/.github/workflows/ping-codeowners-prs.yml
index 51335c8ab264..40e6c46c83e1 100644
--- a/.github/workflows/ping-codeowners-prs.yml
+++ b/.github/workflows/ping-codeowners-prs.yml
@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.actor != 'dependabot[bot]' && github.repository_owner == 'open-telemetry' }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- name: Run ping-codeowners-prs.sh
run: ./.github/workflows/scripts/ping-codeowners-prs.sh
diff --git a/.github/workflows/prepare-release.yml b/.github/workflows/prepare-release.yml
index 63413bdbbed3..77484f763001 100644
--- a/.github/workflows/prepare-release.yml
+++ b/.github/workflows/prepare-release.yml
@@ -17,16 +17,16 @@ jobs:
prepare-release:
runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
repository: 'open-telemetry/opentelemetry-collector'
path: opentelemetry-collector
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
path: opentelemetry-collector-contrib
- - uses: actions/setup-go@v4
+ - uses: actions/setup-go@v5
with:
- go-version: ~1.19.10
+ go-version: "1.21.7"
cache: false
- name: Prepare release for contrib
working-directory: opentelemetry-collector-contrib
diff --git a/.github/workflows/prometheus-compliance-tests.yml b/.github/workflows/prometheus-compliance-tests.yml
index a5d22160d3be..0cc65dca4269 100644
--- a/.github/workflows/prometheus-compliance-tests.yml
+++ b/.github/workflows/prometheus-compliance-tests.yml
@@ -5,6 +5,7 @@ on:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+*'
pull_request:
+ merge_group:
# Do not cancel this workflow on main. See https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/16616
concurrency:
@@ -12,24 +13,25 @@ concurrency:
cancel-in-progress: true
env:
- # See: https://github.com/actions/cache/issues/810#issuecomment-1222550359
- # Cache downloads for this workflow consistently run in under 1 minute
- SEGMENT_DOWNLOAD_TIMEOUT_MINS: 5
+ # Make sure to exit early if cache segment download times out after 2 minutes.
+ # We limit cache download as a whole to 5 minutes.
+ SEGMENT_DOWNLOAD_TIMEOUT_MINS: 2
jobs:
prometheus-compliance-tests:
runs-on: ubuntu-latest
if: ${{ github.actor != 'dependabot[bot]' }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
path: opentelemetry-collector-contrib
- - uses: actions/setup-go@v4
+ - uses: actions/setup-go@v5
with:
- go-version: ~1.19.10
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -40,13 +42,14 @@ jobs:
- run: make otelcontribcol
working-directory: opentelemetry-collector-contrib
- name: Checkout compliance repo
- uses: actions/checkout@v3
+ uses: actions/checkout@v4
with:
repository: prometheus/compliance
path: compliance
- ref: f0482884578bac67b053e3eaa1ca7f783d146557
- name: Copy binary to compliance directory
- run: mkdir compliance/remote_write_sender/bin && cp opentelemetry-collector-contrib/bin/otelcontribcol_linux_amd64 compliance/remote_write_sender/bin/otelcol_linux_amd64
+ # The required name of the downloaded artifact is `otelcol_0.42.0_linux_amd64`, so we place the collector contrib artifact under the same name in the bin folder to run.
+ # Source: https://github.com/prometheus/compliance/blob/12cbdf92abf7737531871ab7620a2de965fc5382/remote_write_sender/targets/otel.go#L8
+ run: mkdir compliance/remote_write_sender/bin && cp opentelemetry-collector-contrib/bin/otelcontribcol_linux_amd64 compliance/remote_write_sender/bin/otelcol_0.42.0_linux_amd64
- name: Run compliance tests
run: go test -v --tags=compliance -run "TestRemoteWrite/otel/.+" ./ |& tee ./test-report.txt
working-directory: compliance/remote_write_sender
diff --git a/.github/workflows/scripts/add-component-options.sh b/.github/workflows/scripts/add-component-options.sh
deleted file mode 100755
index 372d72024c5f..000000000000
--- a/.github/workflows/scripts/add-component-options.sh
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env sh
-#
-# Copyright The OpenTelemetry Authors
-# SPDX-License-Identifier: Apache-2.0
-#
-# Takes the list of components from the CODEOWNERS file and inserts them
-# as a YAML list in a GitHub issue template, then prints out the resulting
-# contents.
-#
-# Note that this is script is intended to be POSIX-compliant since it is
-# intended to also be called from the Makefile on developer machines,
-# which aren't guaranteed to have Bash or a GNU userland installed.
-
-if [ -z "${FILE}" ]; then
- echo 'FILE is empty, please ensure it is set.'
- exit 1
-fi
-
-CUR_DIRECTORY=$(dirname "$0")
-
-# Get the line number for text within a file
-get_line_number() {
- text=$1
- file=$2
-
- grep -n "${text}" "${file}" | awk '{ print $1 }' | grep -oE '[0-9]+'
-}
-
-LABELS=""
-
-START_LINE=$(get_line_number '# Start Collector components list' "${FILE}")
-END_LINE=$(get_line_number '# End Collector components list' "${FILE}")
-TOTAL_LINES=$(wc -l "${FILE}" | awk '{ print $1 }')
-
-head -n "${START_LINE}" "${FILE}"
-for COMPONENT in $(sh "${CUR_DIRECTORY}/get-components.sh"); do
- TYPE=$(echo "${COMPONENT}" | cut -f1 -d'/')
- REST=$(echo "${COMPONENT}" | cut -f2- -d'/' | sed "s%${TYPE}/%/%" | sed "s%${TYPE}\$%%")
- LABEL=""
-
- if [ -z "${TYPE}" ] | [ -z "${REST}" ]; then
- LABEL="${COMPONENT}"
- else
- LABEL="${TYPE}/${REST}"
- fi
-
- LABELS="${LABELS}${LABEL}\n"
-done
-printf "${LABELS}" | sort | awk '{ printf " - %s\n",$1 }'
-tail -n $((TOTAL_LINES-END_LINE+1)) "${FILE}"
-
diff --git a/.github/workflows/scripts/add-labels.sh b/.github/workflows/scripts/add-labels.sh
index 7f94993cddac..93d081abaa45 100755
--- a/.github/workflows/scripts/add-labels.sh
+++ b/.github/workflows/scripts/add-labels.sh
@@ -59,7 +59,7 @@ for LABEL_REQ in ${LABELS}; do
# Labels added by a GitHub Actions workflow don't trigger other workflows
# by design, so we have to manually ping code owners here.
- COMPONENT="${LABEL}" ISSUE=${ISSUE} SENDER="${SENDER}" bash "${CUR_DIRECTORY}/ping-codeowners.sh"
+ COMPONENT="${LABEL}" ISSUE=${ISSUE} SENDER="${SENDER}" bash "${CUR_DIRECTORY}/ping-codeowners-issues.sh"
else
gh issue edit "${ISSUE}" --remove-label "${LABEL}"
fi
diff --git a/.github/workflows/scripts/check-codeowners.sh b/.github/workflows/scripts/check-codeowners.sh
index ffd55d87ef9e..1aee6d5ecc7e 100755
--- a/.github/workflows/scripts/check-codeowners.sh
+++ b/.github/workflows/scripts/check-codeowners.sh
@@ -63,7 +63,7 @@ check_component_existence() {
do
if [[ $line =~ ^[^#\*] ]]; then
COMPONENT_PATH=$(echo "$line" | cut -d" " -f1)
- if [ ! -d "$COMPONENT_PATH" ]; then
+ if [ ! -e "$COMPONENT_PATH" ]; then
echo "\"$COMPONENT_PATH\" does not exist as specified in CODEOWNERS"
((NOT_EXIST_COMPONENTS=NOT_EXIST_COMPONENTS+1))
fi
diff --git a/.github/workflows/scripts/check-collector-module-version.sh b/.github/workflows/scripts/check-collector-module-version.sh
index 81a9437f8d01..8a00271161d4 100755
--- a/.github/workflows/scripts/check-collector-module-version.sh
+++ b/.github/workflows/scripts/check-collector-module-version.sh
@@ -12,6 +12,8 @@ source ./internal/buildscripts/modules
set -eu -o pipefail
+mod_files=$(find . -type f -name "go.mod")
+
# Return the collector main core version
get_collector_version() {
collector_module="$1"
@@ -31,59 +33,36 @@ get_collector_version() {
check_collector_versions_correct() {
collector_module="$1"
collector_mod_version="$2"
- incorrect_version=0
- mod_files=$(find . -type f -name "go.mod")
+ echo "Checking $collector_module is used with $collector_mod_version"
# Loop through all the module files, checking the collector version
for mod_file in $mod_files; do
- if grep -q "$collector_module" "$mod_file"; then
- mod_line=$(grep -m1 "$collector_module" "$mod_file")
- version=$(echo "$mod_line" | cut -d" " -f2)
-
- # To account for a module on its own 'require' line,
- # the version field is shifted right by 1. Match
- # with or without a trailing space at the end to account
- # for the space at the end of some collector modules.
- if [ "$version" == "$collector_module" ] || [ "$version " == "$collector_module" ]; then
- version=$(echo "$mod_line" | cut -d" " -f3)
- fi
-
- if [ "$version" != "$collector_mod_version" ]; then
- incorrect_version=$((incorrect_version+1))
- echo "Incorrect version \"$version\" of \"$collector_module\" is included in \"$mod_file\". It should be version \"$collector_mod_version\"."
- fi
+ if [ "$(uname)" == "Darwin" ]; then
+ sed -i '' "s|$collector_module [^ ]*|$collector_module $collector_mod_version|g" $mod_file
+ else
+ sed -i'' "s|$collector_module [^ ]*|$collector_module $collector_mod_version|g" $mod_file
fi
done
-
- echo "There are $incorrect_version incorrect \"$collector_module\" version(s) in the module files."
- if [ "$incorrect_version" -gt 0 ]; then
- exit 1
- fi
}
MAIN_MOD_FILE="./go.mod"
+
+BETA_MODULE="go.opentelemetry.io/collector"
# Note space at end of string. This is so it filters for the exact string
# only and does not return string which contains this string as a substring.
-BETA_MODULE="go.opentelemetry.io/collector "
-BETA_MOD_VERSION=$(get_collector_version "$BETA_MODULE" "$MAIN_MOD_FILE")
+BETA_MOD_VERSION=$(get_collector_version "$BETA_MODULE " "$MAIN_MOD_FILE")
check_collector_versions_correct "$BETA_MODULE" "$BETA_MOD_VERSION"
for mod in ${beta_modules[@]}; do
check_collector_versions_correct "$mod" "$BETA_MOD_VERSION"
done
-# Check RC modules
-RC_MODULE="go.opentelemetry.io/collector/pdata "
-RC_MOD_VERSION=$(get_collector_version "$RC_MODULE" "$MAIN_MOD_FILE")
-check_collector_versions_correct "$RC_MODULE" "$RC_MOD_VERSION"
-for mod in ${rc_modules[@]}; do
- check_collector_versions_correct "$mod" "$RC_MOD_VERSION"
+# Check stable modules
+STABLE_MODULE="go.opentelemetry.io/collector/pdata"
+STABLE_MOD_VERSION=$(get_collector_version "$STABLE_MODULE" "$MAIN_MOD_FILE")
+check_collector_versions_correct "$STABLE_MODULE" "$STABLE_MOD_VERSION"
+for mod in ${stable_modules[@]}; do
+ check_collector_versions_correct "$mod" "$STABLE_MOD_VERSION"
done
-# Check stable modules, none currently exist, uncomment when pdata is 1.0.0
-# STABLE_MODULE="go.opentelemetry.io/collector/pdata "
-# STABLE_MOD_VERSION=$(get_collector_version "$STABLE_MODULE" "$MAIN_MOD_FILE")
-# check_collector_versions_correct "$STABLE_MODULE" "$STABLE_MOD_VERSION"
-# for mod in ${stable_modules[@]}; do
-# check_collector_versions_correct "$mod" "$STABLE_MOD_VERSION"
-# done
\ No newline at end of file
+git diff --exit-code
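Rather than merely counting mismatched versions, the script now rewrites each `go.mod` in place and lets the final `git diff --exit-code` fail the job if anything changed. The `uname` branch exists because BSD (macOS) `sed` takes the `-i` backup suffix as a separate argument, while GNU `sed` expects it attached to the flag. A standalone sketch of that portable in-place edit, using a throwaway `go.mod` and made-up versions:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway go.mod; the module path is real, the versions are invented.
tmp=$(mktemp -d)
cat > "$tmp/go.mod" <<'EOF'
module example.com/demo

require go.opentelemetry.io/collector v0.88.0
EOF

module="go.opentelemetry.io/collector"
want="v0.89.0"

# BSD (macOS) sed needs the backup suffix as a separate argument
# ('' = no backup file); GNU sed wants it attached (-i'').
if [ "$(uname)" = "Darwin" ]; then
  sed -i '' "s|$module [^ ]*|$module $want|g" "$tmp/go.mod"
else
  sed -i'' "s|$module [^ ]*|$module $want|g" "$tmp/go.mod"
fi

grep "$module" "$tmp/go.mod"   # the require line is now pinned to $want
```

Using `|` as the `s` delimiter avoids escaping the slashes in the module path.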
diff --git a/.github/workflows/scripts/dependabot-pr.sh b/.github/workflows/scripts/dependabot-pr.sh
deleted file mode 100755
index eb771351fb53..000000000000
--- a/.github/workflows/scripts/dependabot-pr.sh
+++ /dev/null
@@ -1,50 +0,0 @@
-#!/bin/zsh -ex
-
-# Copyright The OpenTelemetry Authors
-# SPDX-License-Identifier: Apache-2.0
-
-git config user.name opentelemetrybot
-git config user.email 107717825+opentelemetrybot@users.noreply.github.com
-
-PR_NAME=dependabot-prs/`date +'%Y-%m-%dT%H%M%S'`
-git checkout -b $PR_NAME
-
-IFS=$'\n'
-requests=($( gh pr list --search "author:app/dependabot" --limit 1000 --json title --jq '.[].title' | sort ))
-message=""
-dirs=(`find . -type f -name "go.mod" -exec dirname {} \; | sort | egrep '^./'`)
-
-declare -A mods
-
-for line in $requests; do
- echo $line
- if [[ $line != Bump* ]]; then
- continue
- fi
-
- module=$(echo $line | cut -f 2 -d " ")
- version=$(echo $line | cut -f 6 -d " ")
-
- mods[$module]=$version
- message+=$line
- message+=$'\n'
-done
-
-for module version in ${(kv)mods}; do
- topdir=`pwd`
- for dir in $dirs; do
- echo "checking $dir"
- cd $dir && if grep -q "$module " go.mod; then go get "$module"@v"$version"; fi
- cd $topdir
- done
-done
-
-make gotidy genotelcontribcol genoteltestbedcol otelcontribcol
-
-git add go.sum go.mod
-git add "**/go.sum" "**/go.mod"
-git commit -m "dependabot updates `date`
-$message"
-git push origin $PR_NAME
-
-gh pr create --title "[chore] dependabot updates `date`" --body "$message"
diff --git a/.github/workflows/scripts/generate-weekly-report.js b/.github/workflows/scripts/generate-weekly-report.js
new file mode 100644
index 000000000000..7b870a4b740c
--- /dev/null
+++ b/.github/workflows/scripts/generate-weekly-report.js
@@ -0,0 +1,439 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+const fs = require('fs');
+const path = require('path');
+const yaml = require('js-yaml');
+
+
+const REPO_NAME = "opentelemetry-collector-contrib"
+const REPO_OWNER = "open-telemetry"
+
+function debug(msg) {
+ console.log(JSON.stringify(msg, null, 2))
+}
+
+async function getIssues(octokit, queryParams, filterPrs = true) {
+ let allIssues = [];
+ try {
+ while (true) {
+ const response = await octokit.issues.listForRepo(queryParams);
+ // filter out pull requests
+ const issues = filterPrs ? response.data.filter(issue => !issue.pull_request) : response.data;
+ allIssues = allIssues.concat(issues);
+
+ // Check the 'link' header to see if there are more pages
+ const linkHeader = response.headers.link;
+ if (!linkHeader || !linkHeader.includes('rel="next"')) {
+ break;
+ }
+
+ queryParams.page++;
+ }
+ return allIssues;
+ } catch (error) {
+ console.error('Error fetching issues:', error);
+ return [];
+ }
+}
+
+function genLookbackDates() {
+ const now = new Date()
+ const midnightYesterday = new Date(
+ Date.UTC(
+ now.getUTCFullYear(),
+ now.getUTCMonth(),
+ now.getUTCDate(),
+ 0, 0, 0, 0
+ )
+ );
+ const sevenDaysAgo = new Date(midnightYesterday);
+ sevenDaysAgo.setDate(midnightYesterday.getDate() - 7);
+ return { sevenDaysAgo, midnightYesterday };
+}
+
+function filterOnDateRange({ issue, sevenDaysAgo, midnightYesterday }) {
+ const createdAt = new Date(issue.created_at);
+ return createdAt >= sevenDaysAgo && createdAt <= midnightYesterday;
+}
+
+async function getNewIssues({octokit, context}) {
+ const { sevenDaysAgo, midnightYesterday } = genLookbackDates();
+ const queryParams = {
+ owner: REPO_OWNER,
+ repo: REPO_NAME,
+ state: 'all', // To get both open and closed issues
+ per_page: 100, // Number of items per page (maximum allowed)
+ page: 1, // Start with page 1
+ since: sevenDaysAgo.toISOString(),
+ };
+
+ try {
+ const allIssues = await getIssues(octokit, queryParams)
+ const filteredIssues = allIssues.filter(issue => filterOnDateRange({ issue, sevenDaysAgo, midnightYesterday }));
+ return filteredIssues;
+ } catch (error) {
+ console.error('Error fetching issues:', error);
+ return [];
+ }
+}
+
+async function getTargetLabelIssues({octokit, labels, filterPrs, context}) {
+ const queryParams = {
+ owner: REPO_OWNER,
+ repo: REPO_NAME,
+ state: 'open',
+ per_page: 100, // Number of items per page (maximum allowed)
+ page: 1, // Start with page 1
+ labels
+ };
+ debug({msg: "fetching issues", queryParams})
+ try {
+ const allIssues = await getIssues(octokit, queryParams, filterPrs)
+ return allIssues;
+ } catch (error) {
+ console.error('Error fetching issues:', error);
+ return [];
+ }
+}
+
+/**
+ * Get data required for issues report
+ */
+async function getIssuesData({octokit, context}) {
+ const targetLabels = {
+ "needs triage": {
+ filterPrs: true,
+ alias: "issuesTriage",
+ },
+ "ready to merge": {
+ filterPrs: false,
+ alias: "issuesReadyToMerge",
+ },
+ "Sponsor Needed": {
+ filterPrs: true,
+ alias: "issuesSponsorNeeded",
+ },
+ };
+
+ const issuesNew = await getNewIssues({octokit, context});
+ const issuesWithLabels = {};
+ for (const lbl of Object.keys(targetLabels)) {
+ const filterPrs = targetLabels[lbl].filterPrs;
+ const resp = await getTargetLabelIssues({octokit, labels: lbl, filterPrs, context});
+ issuesWithLabels[lbl] = resp;
+ }
+
+ // tally results
+ const stats = {
+ issuesNew: {
+ title: "New issues",
+ count: 0,
+ data: []
+ },
+ issuesTriage: {
+ title: "Issues needing triage",
+ count: 0,
+ data: []
+ },
+ issuesReadyToMerge: {
+ title: "Issues ready to merge",
+ count: 0,
+ data: []
+ },
+ issuesSponsorNeeded: {
+ title: "Issues needing sponsorship",
+ count: 0,
+ data: []
+ },
+ issuesNewSponsorNeeded: {
+ title: "New issues needing sponsorship",
+ count: 0,
+ data: []
+ },
+ }
+
+ // add new issues
+ issuesNew.forEach(issue => {
+ stats.issuesNew.count++;
+ const { html_url: url, title, number } = issue;
+ stats.issuesNew.data.push({ url, title, number });
+ });
+
+ // add issues with labels
+ for (const lbl of Object.keys(targetLabels)) {
+ const alias = targetLabels[lbl].alias;
+ stats[alias].count = issuesWithLabels[lbl].length;
+ stats[alias].data = issuesWithLabels[lbl].map(issue => {
+ const { html_url: url, title, number } = issue;
+ return { url, title, number };
+ })
+ }
+
+ // add new issues with sponsor needed label
+ const { sevenDaysAgo, midnightYesterday } = genLookbackDates();
+ const sponsorNeededIssues = issuesWithLabels["Sponsor Needed"].filter(issue => filterOnDateRange({ issue, sevenDaysAgo, midnightYesterday }));
+ sponsorNeededIssues.forEach(issue => {
+ stats.issuesNewSponsorNeeded.count++;
+ const { html_url: url, title, number } = issue;
+ stats.issuesNewSponsorNeeded.data.push({ url, title, number });
+ });
+ return stats
+}
+
+function generateReport({ issuesData, previousReport, componentData }) {
+ const out = [
+ `## Format`,
+ "- `{CATEGORY}: {COUNT} ({CHANGE_FROM_PREVIOUS_WEEK})`",
+ "## Issues Report",
+ ''];
+
+ // generate report for issues
+ for (const lbl of Object.keys(issuesData)) {
+ const section = [``];
+ const { count, data, title } = issuesData[lbl];
+
+ if (previousReport === null) {
+ section.push(`- ${title}: ${count}`);
+ } else {
+ const previousCount = previousReport.issuesData[lbl].count;
+      section.push(`- ${title}: ${count} (${count - previousCount})`);
+ }
+
+ // generate summary if issues exist
+    // NOTE: the newline after </summary> is required for markdown to render correctly
+    if (data.length !== 0) {
+      section.push(`<details>
+<summary> Issues </summary>\n`);
+ section.push(`${data.map((issue) => {
+ const { url, title, number } = issue;
+ return `- [ ] ${title} ([#${number}](${url}))`;
+ }).join('\n')}
+    </details>`);
+ }
+
+ section.push(' ');
+ out.push(section.join('\n'));
+ }
+  out.push('<br>');
+
+ // generate report for components
+ out.push('\n## Components Report', '');
+ for (const lbl of Object.keys(componentData)) {
+ const section = [``];
+ const data = componentData[lbl];
+ const count = Object.keys(data).length;
+
+ section.push(`- ${lbl}: ${count}`);
+ if (data.length !== 0) {
+      // NOTE: the newline after </summary> is required for markdown to render correctly
+      section.push(`<details>
+<summary> Components </summary>\n`);
+ section.push(`${Object.keys(data).map((compName) => {
+ const {stability} = data[compName]
+ return `- [ ] ${compName}: ${JSON.stringify(stability)}`
+ }).join('\n')}
+    </details>`);
+ }
+ section.push(' ');
+ out.push(section.join('\n'));
+ }
+  out.push('<br>');
+
+ // add json data
+ out.push('\n ## JSON Data');
+ out.push('');
+  out.push(`<details>
+<summary> Expand </summary>
+<pre>
+{
+  "issuesData": ${JSON.stringify(issuesData, null, 2)},
+  "componentData": ${JSON.stringify(componentData, null, 2)}
+}
+</pre>
+</details>
+  `);
+ const report = out.join('\n');
+ return report;
+}
+
+async function createIssue({ octokit, lookbackData, report, context }) {
+ const title = `Weekly Report: ${lookbackData.sevenDaysAgo.toISOString().slice(0, 10)} - ${lookbackData.midnightYesterday.toISOString().slice(0, 10)}`;
+ return octokit.issues.create({
+ // NOTE: we use the owner from the context because folks forking this repo might not have permission to (nor should they when developing)
+ // create issues in the upstream repository
+ owner: context.payload.repository.owner.login,
+ repo: REPO_NAME,
+ title,
+ body: report,
+ labels: ["report"]
+ })
+}
+
+async function getLastWeeksReport({ octokit, since, context }) {
+ const issues = await octokit.issues.listForRepo({
+
+ owner: context.payload.repository.owner.login,
+ repo: REPO_NAME,
+ state: 'all', // To get both open and closed issues
+ labels: ["report"],
+ since: since.toISOString(),
+ per_page: 1,
+ sort: "created",
+ direction: "asc"
+ });
+ if (issues.data.length === 0) {
+ return null;
+ }
+ // grab earliest issue if multiple
+ return issues.data[0];
+}
+
+function parseJsonFromText(text) {
+  // Use regex to find the JSON data within the <pre> tags
+  const regex = /<pre>\s*([\s\S]*?)\s*<\/pre>/;
+ const match = text.match(regex);
+
+ if (match && match[1]) {
+ // Parse the found string to JSON
+ return JSON.parse(match[1]);
+ } else {
+ throw new Error("JSON data not found");
+ }
+}
+
+async function processIssues({ octokit, context, lookbackData }) {
+ const issuesData = await getIssuesData({octokit, context});
+
+ const prevReportLookback = new Date(lookbackData.sevenDaysAgo)
+ prevReportLookback.setDate(prevReportLookback.getDate() - 7)
+ const previousReportIssue = await getLastWeeksReport({octokit, since: prevReportLookback, context});
+ // initialize to zeros
+ let previousReport = null;
+
+ if (previousReportIssue !== null) {
+ const {created_at, id, url, title} = previousReportIssue;
+ debug({ msg: "previous issue", created_at, id, url, title })
+ previousReport = parseJsonFromText(previousReportIssue.body)
+ }
+
+ return {issuesData, previousReport}
+
+
+}
+
+const findFilesByName = (startPath, filter) => {
+ let results = [];
+
+ // Check if directory exists
+ let files;
+ try {
+ files = fs.readdirSync(startPath);
+ } catch (error) {
+ console.error("Error reading directory: ", startPath, error);
+ return [];
+ }
+
+ for (let i = 0; i < files.length; i++) {
+ const filename = path.join(startPath, files[i]);
+ let stat;
+ try {
+ stat = fs.lstatSync(filename);
+ } catch (error) {
+ console.error("Error stating file: ", filename, error);
+ continue;
+ }
+
+ if (stat.isDirectory()) {
+ const innerResults = findFilesByName(filename, filter); // Recursive call
+ results = results.concat(innerResults);
+ } else if (path.basename(filename) == filter) {
+ results.push(filename);
+ }
+ }
+ return results;
+};
+
+function processFiles(files) {
+ const results = {};
+
+ for (let filePath of files) {
+ const name = path.basename(path.dirname(filePath)); // Directory of the file
+ const fileData = fs.readFileSync(filePath, 'utf8'); // Read the file as a string
+
+ let data;
+ try {
+ data = yaml.load(fileData); // Parse YAML
+ } catch (err) {
+ console.error(`Error parsing YAML for file ${filePath}:`, err);
+ continue; // Skip this file if there's an error in parsing
+ }
+
+ let component = path.basename(path.dirname(path.dirname(filePath)));
+ try {
+ // if component is defined in metadata status, prefer to use that
+ component = data.status.class;
+ } catch(err) {
+ }
+
+ if (!results[component]) {
+ results[component] = {};
+ }
+
+ results[component][name] = {
+ path: filePath,
+ data
+ };
+ }
+
+ return results;
+}
+
+const processStatusResults = (results) => {
+ const filteredResults = {};
+
+ for (const component in results) {
+ for (const name in results[component]) {
+ const { path, data } = results[component][name];
+
+ if (data && data.status && data.status.stability) {
+ const { stability } = data.status;
+ const statuses = ['unmaintained', 'deprecated'];
+
+ for (const status of statuses) {
+ if (stability[status] && stability[status].length > 0) {
+ if (!filteredResults[status]) {
+ filteredResults[status] = {};
+ }
+ filteredResults[status][name] = { path, stability: data.status.stability, component };
+ }
+ }
+ }
+ }
+ }
+
+ return filteredResults;
+};
+
+async function processComponents() {
+ const results = findFilesByName(`.`, 'metadata.yaml');
+ const resultsClean = processFiles(results)
+ const resultsWithStability = processStatusResults(resultsClean)
+ return resultsWithStability
+
+}
+
+async function main({ github, context }) {
+ debug({msg: "running..."})
+ const octokit = github.rest;
+ const lookbackData = genLookbackDates();
+ const {issuesData, previousReport} = await processIssues({ octokit, context, lookbackData })
+ const componentData = await processComponents()
+
+ const report = generateReport({ issuesData, previousReport, componentData })
+
+ await createIssue({octokit, lookbackData, report, context});
+}
+
+module.exports = async ({ github, context }) => {
+ await main({ github, context })
+}
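The report embeds its raw JSON between `<pre>` tags precisely so that `getLastWeeksReport` plus `parseJsonFromText` can recover last week's counts from the issue body. The same extraction can be sketched in shell (the issue body below is invented, and the two-step `sed` stands in for the script's regex):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Invented issue body in the report format: a JSON payload between <pre> tags.
body='## Issues Report
<details>
<summary> Expand </summary>
<pre>
{"issuesData": {"issuesNew": {"count": 12}}}
</pre>
</details>'

# Keep the lines from <pre> through </pre>, then drop the tag lines
# themselves, leaving just the JSON payload.
json=$(printf '%s\n' "$body" | sed -n '/<pre>/,/<\/pre>/p' | sed '1d;$d')
echo "$json"
```

The round trip is what lets each weekly report show a delta against the previous week without any external storage.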
diff --git a/.github/workflows/scripts/get-codeowners.sh b/.github/workflows/scripts/get-codeowners.sh
index e7bb4ec673cc..8d84b99e7787 100755
--- a/.github/workflows/scripts/get-codeowners.sh
+++ b/.github/workflows/scripts/get-codeowners.sh
@@ -9,24 +9,32 @@
set -euo pipefail
+get_component_type() {
+ echo "${COMPONENT}" | cut -f 1 -d '/'
+}
+
+get_codeowners() {
+ # grep arguments explained:
+ # -m 1: Match the first occurrence
+ # ^: Match from the beginning of the line
+ # ${1}: Insert first argument given to this function
+ # [\/]\?: Match 0 or 1 instances of a forward slash
+ # \s: Match any whitespace character
+ echo "$((grep -m 1 "^${1}[\/]\?\s" .github/CODEOWNERS || true) | \
+    sed 's/  */ /g' | \
+ cut -f3- -d ' ')"
+}
+
if [[ -z "${COMPONENT:-}" ]]; then
echo "COMPONENT has not been set, please ensure it is set."
exit 1
fi
-# grep exits with status code 1 if there are no matches,
-# so we manually set RESULT to 0 if nothing is found.
-RESULT=$(grep -c "${COMPONENT}" .github/CODEOWNERS || true)
+OWNERS="$(get_codeowners "${COMPONENT}")"
-# there may be more than 1 component matching a label
-# if so, try to narrow things down by appending the component
-# type to the label
-if [[ ${RESULT} != 1 ]]; then
- COMPONENT_TYPE=$(echo "${COMPONENT}" | cut -f 1 -d '/')
- COMPONENT="${COMPONENT}${COMPONENT_TYPE}"
+if [[ -z "${OWNERS:-}" ]]; then
+ COMPONENT_TYPE=$(get_component_type "${COMPONENT}")
+ OWNERS="$(get_codeowners "${COMPONENT}${COMPONENT_TYPE}")"
fi
-OWNERS=$( (grep -m 1 "${COMPONENT}" .github/CODEOWNERS || true) | sed 's/  */ /g' | cut -f3- -d ' ' )
-
echo "${OWNERS}"
-
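The rewritten helper anchors its `grep` at the start of the line and requires an optional slash plus whitespace after the component, so a short component name can no longer match a longer one that merely contains it. A self-contained sketch with an invented CODEOWNERS excerpt (owner handles are illustrative; the `sed` pattern squeezes alignment padding to single spaces so `cut -f3-` reliably returns the individual owners after the approvers team):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Invented CODEOWNERS excerpt; handles are illustrative.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
receiver/foo/          @open-telemetry/collector-contrib-approvers @alice @bob
receiver/fooreceiver/  @open-telemetry/collector-contrib-approvers @carol
EOF

get_codeowners() {
  # ^...[\/]\?\s anchors the whole component name, so "receiver/foo"
  # cannot match "receiver/fooreceiver". sed squeezes runs of spaces to
  # one; cut then drops the path (field 1) and the approvers team
  # (field 2), leaving only the individual owners.
  echo "$( (grep -m 1 "^${1}[\/]\?\s" "$tmp" || true) | sed 's/  */ /g' | cut -f3- -d ' ')"
}

get_codeowners "receiver/foo"           # @alice @bob
get_codeowners "receiver/fooreceiver"   # @carol
```

The `|| true` keeps the pipeline alive when `grep` finds nothing, so an unknown component yields an empty owner list instead of a script failure.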
diff --git a/.github/workflows/scripts/ping-codeowners-issues.sh b/.github/workflows/scripts/ping-codeowners-issues.sh
index cce1d37dbe18..a9ce7a875cbe 100755
--- a/.github/workflows/scripts/ping-codeowners-issues.sh
+++ b/.github/workflows/scripts/ping-codeowners-issues.sh
@@ -7,8 +7,8 @@
set -euo pipefail
-if [[ -z "${COMPONENT:-}" || -z "${ISSUE:-}" || -z "${SENDER:-}" ]]; then
- echo "At least one of COMPONENT, ISSUE, or SENDER has not been set, please ensure each is set."
+if [[ -z "${COMPONENT:-}" || -z "${ISSUE:-}" ]]; then
+ echo "Either COMPONENT or ISSUE has not been set, please ensure both are set."
exit 0
fi
@@ -20,9 +20,4 @@ if [[ -z "${OWNERS}" ]]; then
exit 0
fi
-if [[ "${OWNERS}" =~ "${SENDER}" ]]; then
- echo "Label applied by code owner ${SENDER}"
- exit 0
-fi
-
gh issue comment "${ISSUE}" --body "Pinging code owners for ${COMPONENT}: ${OWNERS}. See [Adding Labels via Comments](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/CONTRIBUTING.md#adding-labels-via-comments) if you do not have permissions to add labels yourself."
diff --git a/.github/workflows/scripts/release-prepare-release.sh b/.github/workflows/scripts/release-prepare-release.sh
index 5e4181807a56..23f349b28f1e 100755
--- a/.github/workflows/scripts/release-prepare-release.sh
+++ b/.github/workflows/scripts/release-prepare-release.sh
@@ -50,6 +50,11 @@ git add .
git commit -m "make multimod-sync changes ${CANDIDATE_BETA}" || (echo "no multimod changes to commit")
make gotidy
+
+pushd cmd/otelcontribcol
+go mod tidy
+popd
+
git add .
git commit -m "make gotidy changes ${CANDIDATE_BETA}" || (echo "no gotidy changes to commit")
make otelcontribcol
diff --git a/.github/workflows/scripts/setup_stability_tests.sh b/.github/workflows/scripts/setup_stability_tests.sh
deleted file mode 100755
index 960ce854404e..000000000000
--- a/.github/workflows/scripts/setup_stability_tests.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/bash -ex
-
-# Copyright The OpenTelemetry Authors
-# SPDX-License-Identifier: Apache-2.0
-
-TESTS="$(make -s -C testbed list-stability-tests | xargs echo|sed 's/ /|/g')"
-
-TESTS=(${TESTS//|/ })
-MATRIX="{\"include\":["
-curr=""
-for i in "${!TESTS[@]}"; do
- curr="${TESTS[$i]}"
- MATRIX+="{\"test\":\"$curr\"},"
-done
-MATRIX+="]}"
-echo "stabilitytest_matrix=$MATRIX" >> $GITHUB_OUTPUT
diff --git a/.github/workflows/scripts/verify-dist-files-exist.sh b/.github/workflows/scripts/verify-dist-files-exist.sh
index a487e9c41531..4db111a94c7a 100755
--- a/.github/workflows/scripts/verify-dist-files-exist.sh
+++ b/.github/workflows/scripts/verify-dist-files-exist.sh
@@ -9,6 +9,7 @@ files=(
bin/otelcontribcol_linux_arm64
bin/otelcontribcol_linux_ppc64le
bin/otelcontribcol_linux_amd64
+ bin/otelcontribcol_linux_s390x
bin/otelcontribcol_windows_amd64.exe
dist/otel-contrib-collector-*.aarch64.rpm
dist/otel-contrib-collector_*_amd64.deb
@@ -16,6 +17,8 @@ files=(
dist/otel-contrib-collector_*_arm64.deb
dist/otel-contrib-collector-*.ppc64le.rpm
dist/otel-contrib-collector_*_ppc64le.deb
+ dist/otel-contrib-collector_*_s390x.deb
+ dist/otel-contrib-collector-*.s390x.rpm
# skip. See https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/10113
# dist/otel-contrib-collector-*amd64.msi
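Entries such as `dist/otel-contrib-collector-*.s390x.rpm` contain globs, so a plain `-f` test on the literal string is not enough. A sketch of glob-aware existence checking with the bash builtin `compgen -G` (the scratch artifact names are invented):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Scratch dist tree; artifact names are invented.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p bin dist
touch bin/otelcontribcol_linux_s390x
touch dist/otel-contrib-collector-1.2.3.s390x.rpm

missing=0
for f in "bin/otelcontribcol_linux_s390x" \
         "dist/otel-contrib-collector-*.s390x.rpm" \
         "dist/otel-contrib-collector_*_s390x.deb"; do
  # compgen -G succeeds only if the glob matches at least one existing path
  if ! compgen -G "$f" > /dev/null; then
    echo "missing: $f"
    missing=$((missing + 1))
  fi
done
echo "missing=$missing"
```

Here only the `.deb` pattern is unmatched, so one artifact is reported missing.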
diff --git a/.github/workflows/telemetrygen.yml b/.github/workflows/telemetrygen.yml
index 57c49a3441bc..e0e411eef9b9 100644
--- a/.github/workflows/telemetrygen.yml
+++ b/.github/workflows/telemetrygen.yml
@@ -4,7 +4,12 @@ on:
branches: [ main ]
tags:
- 'v[0-9]+.[0-9]+.[0-9]+*'
+ merge_group:
pull_request:
+env:
+ # Make sure to exit early if cache segment download times out after 2 minutes.
+ # We limit cache download as a whole to 5 minutes.
+ SEGMENT_DOWNLOAD_TIMEOUT_MINS: 2
# Do not cancel this workflow on main. See https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/16616
concurrency:
@@ -16,13 +21,38 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.actor != 'dependabot[bot]' }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
+ with:
+ go-version: "1.21.7"
+ cache: false
+ - name: Cache Go
+ id: go-cache
+ timeout-minutes: 5
+ uses: actions/cache@v3
+ with:
+ path: |
+ ~/go/bin
+ ~/go/pkg/mod
+ key: go-cache-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
+ - name: Set up QEMU
+ uses: docker/setup-qemu-action@v3
+ - name: Set up Docker Buildx
+ uses: docker/setup-buildx-action@v3
+ - name: Build binaries
+ run: |
+ GOOS=linux GOARCH=ppc64le make telemetrygen
+ GOOS=linux GOARCH=arm64 make telemetrygen
+ GOOS=linux GOARCH=amd64 make telemetrygen
+ GOOS=linux GOARCH=s390x make telemetrygen
+ cp bin/telemetrygen_* cmd/telemetrygen/
- name: Build telemetrygen
- uses: docker/build-push-action@v4
+ uses: docker/build-push-action@v5
with:
context: cmd/telemetrygen
push: false
tags: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:dev
+ platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
publish-latest:
runs-on: ubuntu-latest
@@ -30,19 +60,44 @@ jobs:
permissions:
packages: write
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
+ with:
+ go-version: "1.21.7"
+ cache: false
+ - name: Cache Go
+ id: go-cache
+ timeout-minutes: 5
+ uses: actions/cache@v3
+ with:
+ path: |
+ ~/go/bin
+ ~/go/pkg/mod
+ key: go-cache-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
+ - name: Set up QEMU
+ uses: docker/setup-qemu-action@v3
+ - name: Set up Docker Buildx
+ uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
- uses: docker/login-action@v2
+ uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
+ - name: Build binaries
+ run: |
+ GOOS=linux GOARCH=ppc64le make telemetrygen
+ GOOS=linux GOARCH=arm64 make telemetrygen
+ GOOS=linux GOARCH=amd64 make telemetrygen
+ GOOS=linux GOARCH=s390x make telemetrygen
+ cp bin/telemetrygen_* cmd/telemetrygen/
- name: Push telemetrygen to Github packages
- uses: docker/build-push-action@v4
+ uses: docker/build-push-action@v5
with:
context: cmd/telemetrygen
push: true
tags: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest
+ platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
publish-stable:
runs-on: ubuntu-latest
@@ -50,19 +105,44 @@ jobs:
permissions:
packages: write
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
+ - uses: actions/setup-go@v5
+ with:
+ go-version: "1.21.7"
+ cache: false
+ - name: Cache Go
+ id: go-cache
+ timeout-minutes: 5
+ uses: actions/cache@v3
+ with:
+ path: |
+ ~/go/bin
+ ~/go/pkg/mod
+ key: go-cache-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
+ - name: Set up QEMU
+ uses: docker/setup-qemu-action@v3
+ - name: Set up Docker Buildx
+ uses: docker/setup-buildx-action@v3
- name: Set Release Tag
id: github_tag
run: ./.github/workflows/scripts/set_release_tag.sh
- name: Login to GitHub Container Registry
- uses: docker/login-action@v2
+ uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- - name: Push telemetrygen to Github packages
+ - name: Build binaries
run: |
- docker build cmd/telemetrygen -t ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:$RELEASE_TAG
- docker push ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:$RELEASE_TAG
- env:
- RELEASE_TAG: ${{ steps.github_tag.outputs.tag }}
+ GOOS=linux GOARCH=ppc64le make telemetrygen
+ GOOS=linux GOARCH=arm64 make telemetrygen
+ GOOS=linux GOARCH=amd64 make telemetrygen
+ GOOS=linux GOARCH=s390x make telemetrygen
+ cp bin/telemetrygen_* cmd/telemetrygen/
+ - name: Push telemetrygen to Github packages
+ uses: docker/build-push-action@v5
+ with:
+ context: cmd/telemetrygen
+ push: true
+ tags: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:${{ steps.github_tag.outputs.tag }}
+ platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
diff --git a/.github/workflows/tidy-dependencies.yml b/.github/workflows/tidy-dependencies.yml
index 1699d3c5898b..6721f5e27572 100644
--- a/.github/workflows/tidy-dependencies.yml
+++ b/.github/workflows/tidy-dependencies.yml
@@ -5,26 +5,27 @@ on:
branches:
- main
+env:
+ # Make sure to exit early if cache segment download times out after 2 minutes.
+ # We limit cache download as a whole to 5 minutes.
+ SEGMENT_DOWNLOAD_TIMEOUT_MINS: 2
+
jobs:
setup-environment:
- # disabling until permission issues is resolved
- # see: https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/22953
- if: ${{ false }}
timeout-minutes: 30
runs-on: ubuntu-latest
- # if: ${{ contains(github.event.pull_request.labels.*.name, 'dependencies') }}
+ if: ${{ !contains(github.event.pull_request.labels.*.name, 'dependency-major-update') && (github.actor == 'renovate[bot]' || contains(github.event.pull_request.labels.*.name, 'renovatebot')) }}
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
- repository: "renovate-bot/open-telemetry-_-opentelemetry-collector-contrib"
ref: ${{ github.head_ref }}
- token: ${{ secrets.OPENTELEMETRYBOT_GITHUB_TOKEN }}
- - uses: actions/setup-go@v4
+ - uses: actions/setup-go@v5
with:
- go-version: 1.19
+ go-version: "1.21.7"
cache: false
- name: Cache Go
id: go-cache
+ timeout-minutes: 5
uses: actions/cache@v3
with:
path: |
@@ -37,12 +38,15 @@ jobs:
- name: Install Tools
if: steps.go-cache.outputs.cache-hit != 'true'
run: make install-tools
- - name: go mod tidy
+ - name: go mod tidy, make genotelcontribcol and make genoteltestbedcol
run: |
- make gotidy
+ make gotidy && make genotelcontribcol && make genoteltestbedcol
git config user.name opentelemetrybot
git config user.email 107717825+opentelemetrybot@users.noreply.github.com
- echo "git diff --exit-code || (git add . && git commit -m \"go mod tidy\" && git push)"
- git diff --exit-code || (git add . && git commit -m "go mod tidy" && git push)
+ echo "git diff --exit-code || (git add . && git commit -m \"go mod tidy, make genotelcontribcol and make genoteltestbedcol\" && git push)"
+ git diff --exit-code || (git add . && git commit -m "go mod tidy, make genotelcontribcol and make genoteltestbedcol" && git push)
env:
GITHUB_TOKEN: ${{ secrets.OPENTELEMETRYBOT_GITHUB_TOKEN }}
+ - uses: actions-ecosystem/action-remove-labels@v1
+ with:
+ labels: renovatebot
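The `git diff --exit-code || (git add . && git commit … && git push)` line in the workflow above is a compact "commit only if something changed" idiom: `git diff --exit-code` returns 0 on a clean tree, so the right-hand side runs only when the tidy step produced changes. A sketch of the idiom against a throwaway repository (no push; names are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repository to exercise the commit-if-changed idiom.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo-bot
git config user.email demo@example.com
echo one > file.txt
git add . && git commit -qm "init"

commit_if_changed() {
  # git diff --exit-code exits 0 on a clean tree and non-zero when
  # tracked files changed, so the commit runs only when there is work.
  git diff --exit-code > /dev/null || (git add . && git commit -qm "go mod tidy")
}

commit_if_changed        # clean tree: nothing happens
echo two > file.txt
commit_if_changed        # dirty tree: a second commit is created
git log --oneline
```

Because the whole expression succeeds either way, it composes cleanly with `set -e` in CI.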
diff --git a/.gitignore b/.gitignore
index fd937a38f1b4..52616038f614 100644
--- a/.gitignore
+++ b/.gitignore
@@ -38,3 +38,4 @@ integration-coverage.html
go.work*
+/result
diff --git a/.golangci.yml b/.golangci.yml
index f60f2ac7e254..16fb19e4b5b3 100644
--- a/.golangci.yml
+++ b/.golangci.yml
@@ -19,6 +19,8 @@ run:
skip-dirs:
- third_party
- local
+ - cmd/otelcontribcol
+ - cmd/oteltestbedcol
# default is true. Enables skipping of directories:
# vendor$, third_party$, testdata$, examples$, Godeps$, builtin$
@@ -85,6 +87,9 @@ linters-settings:
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
+ rewrite-rules:
+ - pattern: interface{}
+ replacement: any
goimports:
# put imports beginning with prefix after 3rd-party packages;
@@ -123,8 +128,16 @@ linters-settings:
files:
- "!**/*_test.go"
+ exhaustive:
+ explicit-exhaustive-switch: true
+ ignore-enum-members: "pmetric.MetricTypeEmpty"
+
+ predeclared:
+ ignore: copy
+
linters:
enable:
+ - decorder
- depguard
- errcheck
- errorlint
@@ -137,12 +150,15 @@ linters:
- gosec
- govet
- misspell
+ - predeclared
+ - reassign
- revive
- staticcheck
- tenv
- unconvert
- unparam
- unused
+ - wastedassign
issues:
# Excluding configuration per-path, per-linter, per-text and per-source
@@ -154,146 +170,6 @@ issues:
- text: "G402:"
linters:
- gosec
- # Following exclude-rules are used to exclude the existing components which do not pass exhaustive lint,
- # in order to enable the exhaustive lint check.
- # We should not add more exclude-rules.
- # The progress of solving existing exclude-rules will be tracked in https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/23266
- - path: fluentforwardreceiver
- linters:
- - exhaustive
- - path: prometheusreceiver
- linters:
- - exhaustive
- - path: deltatorateprocessor
- linters:
- - exhaustive
- - path: groupbyattrsprocessor
- linters:
- - exhaustive
- - path: filterprocessor
- linters:
- - exhaustive
- - path: metricsgenerationprocessor
- linters:
- - exhaustive
- - path: metricstransformprocessor
- linters:
- - exhaustive
- - path: probabilisticsamplerprocessor
- linters:
- - exhaustive
- - path: servicegraphprocessor
- linters:
- - exhaustive
- - path: spanprocessor
- linters:
- - exhaustive
- - path: resourcedetectionprocessor
- linters:
- - exhaustive
- - path: tailsamplingprocessor
- linters:
- - exhaustive
- - path: transformprocessor
- linters:
- - exhaustive
- - path: alibabacloudlogserviceexporter
- linters:
- - exhaustive
- - path: awsemfexporter
- linters:
- - exhaustive
- - path: awsxrayexporter
- linters:
- - exhaustive
- - path: azuremonitorexporter
- linters:
- - exhaustive
- - path: azuredataexplorerexporter
- linters:
- - exhaustive
- - path: carbonexporter
- linters:
- - exhaustive
- - path: coralogixexporter
- linters:
- - exhaustive
- - path: datasetexporter
- linters:
- - exhaustive
- - path: elasticsearchexporter
- linters:
- - exhaustive
- - path: googlecloudpubsubexporter
- linters:
- - exhaustive
- - path: instanaexporter
- linters:
- - exhaustive
- - path: jaegerthrifthttpexporter
- linters:
- - exhaustive
- - path: logzioexporter
- linters:
- - exhaustive
- - path: prometheusexporter
- linters:
- - exhaustive
- - path: prometheusremotewriteexporter
- linters:
- - exhaustive
- - path: skywalkingexporter
- linters:
- - exhaustive
- - path: splunkhecexporter
- linters:
- - exhaustive
- - path: tanzuobservabilityexporter
- linters:
- - exhaustive
- - path: k8sobserver
- linters:
- - exhaustive
- - path: containerinsight
- linters:
- - exhaustive
- - path: filter
- linters:
- - exhaustive
- - path: coreinternal
- linters:
- - exhaustive
- - path: k8sconfig
- linters:
- - exhaustive
- - path: ottl
- linters:
- - exhaustive
- - path: resourcetotelemetry
- linters:
- - exhaustive
- - path: jaeger
- linters:
- - exhaustive
- - path: prometheus
- linters:
- - exhaustive
- - path: loki
- linters:
- - exhaustive
- - path: opencensus
- linters:
- - exhaustive
- - path: zipkin
- linters:
- - exhaustive
- - path: configschema
- linters:
- - exhaustive
- - path: testbed
- linters:
- - exhaustive
- # Preserve original source code in attributed package.
- - path: ctimefmt
+ - text: "SA1019: \"github.com/open-telemetry/opentelemetry-collector-contrib/cmd/configschema"
linters:
- - revive
+ - staticcheck
diff --git a/CHANGELOG-API.md b/CHANGELOG-API.md
new file mode 100644
index 000000000000..ffc171048c49
--- /dev/null
+++ b/CHANGELOG-API.md
@@ -0,0 +1,270 @@
+
+
+# GO API Changelog
+
+This changelog includes only developer-facing changes.
+If you are looking for user-facing changes, check out [CHANGELOG.md](./CHANGELOG.md).
+
+
+
+## v0.95.0
+
+### 🛑 Breaking changes 🛑
+
+- `pkg/stanza`: Remove deprecated pkg/stanza/attrs (#30449)
+- `httpforwarderextension`: Rename the extension httpforwarder to httpforwarderextension (#24171)
+- `extension/storage`: The `filestorage` and `dbstorage` extensions are now standalone modules. (#31040)
+ If using the OpenTelemetry Collector Builder, you will need to update your import paths to use the new module(s).
+ - `github.com/open-telemetry/opentelemetry-collector-contrib/extension/storage/filestorage`
+ - `github.com/open-telemetry/opentelemetry-collector-contrib/extension/storage/dbstorage`
+
+
+### 💡 Enhancements 💡
+
+- `pkg/golden`: Added an option to skip the metric timestamp normalization for WriteMetrics. (#30919)
+- `healthcheckextension`: Remove usage of deprecated `host.ReportFatalError` (#30582)
+
+## v0.94.0
+
+### 🚩 Deprecations 🚩
+
+- `testbed`: Deprecate testbed.GetAvailablePort in favor of testutil.GetAvailablePort (#30811)
+ Move healthcheckextension to use testutil.GetAvailablePort
+
+### 🚀 New components 🚀
+
+- `pkg_sampling`: Package of code for parsing OpenTelemetry tracestate probability sampling fields. (#29738)
+
+## v0.93.0
+
+### 🛑 Breaking changes 🛑
+
+- `testbed`: Remove unused AWS XRay mock receiver (#30381)
+- `docker`: Adopt api_version as strings to correct invalid float truncation (#24025)
+- `prometheusreceiver`: Consolidate Config members and remove the need of placeholders. (#29901)
+- `all`: Remove obsolete "// +build" directive (#30651)
+- `testbed`: Expand TestCase capabilities with broken out LoadGenerator interface (#30303)
+
+### 🚩 Deprecations 🚩
+
+- `pkg/stanza`: Deprecate pkg/stanza/attrs package in favor of pkg/stanza/fileconsumer/attrs (#30449)
+
+### 💡 Enhancements 💡
+
+- `testbed`: Adds and adopts new WithEnvVar child process option, moving GOMAXPROCS=2 to initializations (#30491)
+
+## v0.92.0
+
+### 🛑 Breaking changes 🛑
+
+- `carbonexporter`: Change Config member names (#29862)
+- `carbonreceiver`: Hide unnecessary public API (#29895)
+- `pkg/ottl`: Unexport `ADD`, `SUB`, `MULT`, `DIV`, `EQ`, `NE`, `LT`, `LTE`, `GT`, and `GTE` (#29925)
+- `pkg/ottl`: Change `Path` to be an interface instead of the grammar struct. (#29897)
+ Affects creators of custom contexts.
+- `golden`: Use testing.TB for golden.WriteMetrics, golden.WriteTraces and golden.WriteLogs over *testing.T (#30277)
+
+### 💡 Enhancements 💡
+
+- `kafkaexporter`: Add the ability to publish Kafka messages with the message key set to the TraceID, which allows partitioning of the Kafka topic. (#12318)
+- `kafkaexporter`: Adds the ability to configure the Kafka client's Client ID. (#30144)
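+  Taken together, these options might look roughly like the following collector config sketch. The option names `partition_traces_by_id` and `client_id` are assumptions inferred from the entries above, not confirmed here:
+
+  ```yaml
+  exporters:
+    kafka:
+      brokers: ["localhost:9092"]
+      # Key messages by TraceID so all spans of a trace land on one
+      # partition (option name assumed from the entry above).
+      partition_traces_by_id: true
+      # Configurable Kafka client ID (option name assumed).
+      client_id: my-collector
+  ```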
+
+## v0.91.0
+
+### 🛑 Breaking changes 🛑
+
+- `pkg/ottl`: Rename `Statements` to `StatementSequence`. Remove `Eval` function from `StatementSequence`, use `ConditionSequence` instead. (#29598)
+
+### 💡 Enhancements 💡
+
+- `pkg/ottl`: Add `ConditionSequence` for evaluating lists of conditions (#29339)
+
+## v0.90.0
+
+### 🛑 Breaking changes 🛑
+
+- `clickhouseexporter`: Replace `Config.QueueSettings` field with `exporterhelper.QueueSettings` and remove `QueueSettings` struct (#27653)
+- `kafkareceiver`: Do not export the function `WithTracesUnmarshalers`, `WithMetricsUnmarshalers`, `WithLogsUnmarshalers` (#26304)
+
+### 💡 Enhancements 💡
+
+- `datadogreceiver`: The datadogreceiver supports the new Datadog protocol sent by the Datadog agent to API/v0.2/traces. (#27045)
+- `pkg/ottl`: Add ability to independently parse OTTL conditions. (#29315)
+
+### 🧰 Bug fixes 🧰
+
+- `cassandraexporter`: Existence check for keyspace and dynamic timeout (#27633)
+
+## v0.89.0
+
+### 🛑 Breaking changes 🛑
+
+- `carbonreceiver`: Do not export function New and pass checkapi. (#26304)
+- `collectdreceiver`: Move to use confighttp.HTTPServerSettings (#28811)
+- `kafkaexporter`: Do not export function WithTracesMarshalers, WithMetricsMarshalers, WithLogsMarshalers and pass checkapi (#26304)
+- `remoteobserverprocessor`: Rename remoteobserverprocessor to remotetapprocessor (#27873)
+
+### 💡 Enhancements 💡
+
+- `extension/encoding`: Introduce interfaces for encoding extensions. (#28686)
+- `exporter/awss3exporter`: This feature allows role assumption for s3 exportation. It is especially useful on Kubernetes clusters that are using IAM roles for service accounts (#28674)
+
+## v0.88.0
+
+### 🚩 Deprecations 🚩
+
+- `pkg/stanza`: Deprecate 'flush.WithPeriod'. Use 'flush.WithFunc' instead. (#27843)
+
+## v0.87.0
+
+### 🛑 Breaking changes 🛑
+
+- `exporter/kafka, receiver/kafka, receiver/kafkametrics`: Move configuration parts to an internal pkg (#27093)
+- `pulsarexporter`: Do not export function WithTracesMarshalers, add test for that and pass checkapi (#26304)
+- `pulsarreceiver`: Do not export the functions `WithLogsUnmarshalers`, `WithMetricsUnmarshalers`, `WithTracesUnmarshalers`, add tests and pass checkapi. (#26304)
+
+### 💡 Enhancements 💡
+
+- `mdatagen`: Allows adding a warning section to the resource_attribute configuration (#19174)
+- `mdatagen`: Allow setting empty metric units (#27089)
+
+## v0.86.0
+
+### 🛑 Breaking changes 🛑
+
+- `azuremonitorexporter`: Unexport `Accept` to comply with checkapi (#26304)
+- `tailsamplingprocessor`: Unexport `SamplingProcessorMetricViews` to comply with checkapi (#26304)
+- `awskinesisexporter`: Do not export the functions `NewTracesExporter`, `NewMetricsExporter`, `NewLogsExporter` and pass checkapi. (#26304)
+- `dynatraceexporter`: Rename struct to keep expected `exporter.Factory` and pass checkapi. (#26304)
+- `ecsobserver`: Do not export the function `DefaultConfig` and pass checkapi. (#26304)
+- `f5cloudexporter`: Do not export the function `NewFactoryWithTokenSourceGetter` and pass checkapi. (#26304)
+- `fluentforwardreceiver`: Rename the `Logs` and `DetermineNextEventMode` functions (and all usages) to lowercase to stop exporting them and pass checkapi (#26304)
+- `groupbyattrsprocessor`: Do not export the function `MetricViews` and pass checkapi. (#26304)
+- `groupbytraceprocessor`: Do not export the function `MetricViews` and pass checkapi. (#26304)
+- `jaegerreceiver`: Do not export the function `DefaultServerConfigUDP` and pass checkapi. (#26304)
+- `lokiexporter`: Do not export the function `MetricViews` and pass checkapi. (#26304)
+- `mongodbatlasreceiver`: Rename struct to pass checkapi. (#26304)
+- `mongodbreceiver`: Do not export the function `NewClient` and pass checkapi. (#26304)
+- `mysqlreceiver`: Do not export the function `Query` (#26304)
+- `pkg/ottl`: Remove support for `ottlarg`. The struct's field order is now the function parameter order. (#25705)
+- `pkg/stanza`: Make trim func composable (#26536)
+ - Adds trim.WithFunc to allow trim funcs to wrap bufio.SplitFuncs.
+ - Removes trim.Func from split.Config.Func. Use trim.WithFunc instead.
+ - Removes trim.Func from flush.WithPeriod. Use trim.WithFunc instead.
+
+- `pkg/stanza`: Rename syslog and tcp MultilineBuilders (#26631)
+ - Rename syslog.OctetMultiLineBuilder to syslog.OctetSplitFuncBuilder
+ - Rename tc.MultilineBuilder to tcp.SplitFuncBuilder
+
+- `probabilisticsamplerprocessor`: Do not export the function `SamplingProcessorMetricViews` and pass checkapi. (#26304)
+- `sentryexporter`: Do not export the functions `CreateSentryExporter` and pass checkapi. (#26304)
+- `sumologicexporter`: Do not export the function `CreateDefaultHTTPClientSettings` and pass checkapi. (#26304)
+
+### 💡 Enhancements 💡
+
+- `pkg/ottl`: Add support for optional parameters (#20879)
+ The new `ottl.Optional` type can now be used in a function's `Arguments` struct
+ to indicate that a parameter is optional.
+
+
+## v0.85.0
+
+### 🛑 Breaking changes 🛑
+
+- `alibabacloudlogserviceexporter`: Do not export the function `NewLogServiceClient` (#26304)
+- `awss3exporter`: Do not export the function `NewMarshaler` (#26304)
+- `statsdreceiver`: Rename function `New` to unexported `newReceiver` to pass checkapi (#26304)
+- `chronyreceiver`: Removes duplicate `Timeout` field. This change has no impact on end users of the component. (#26113)
+- `dockerstatsreceiver`: Removes duplicate `Timeout` field. This change has no impact on end users of the component. (#26114)
+- `elasticsearchexporter`: Do not export the function `DurationAsMicroseconds` (#26304)
+- `jaegerexporter`: Do not export the function `MetricViews` (#26304)
+- `k8sobjectsreceiver`: Do not export the function `NewTicker` (#26304)
+- `pkg/stanza`: Rename 'pkg/stanza/decoder' to 'pkg/stanza/decode' (#26340)
+- `pkg/stanza`: Move tokenize.SplitterConfig.Encoding to fileconsumer.Config.Encoding (#26511)
+- `pkg/stanza`: Remove Flusher from tokenize.SplitterConfig (#26517)
+  Removes the following in favor of flush.WithPeriod: tokenize.DefaultFlushPeriod, tokenize.FlusherConfig, tokenize.NewFlusherConfig
+- `pkg/stanza`: Remove tokenize.SplitterConfig (#26537)
+- `pkg/stanza`: Rename "tokenize" package to "split" (#26540)
+ - Remove 'Multiline' struct
+ - Remove 'NewMultilineConfig' struct
+ - Rename 'MultilineConfig' to 'split.Config'
+
+- `pkg/stanza`: Extract whitespace trim configuration into trim.Config (#26511)
+ - PreserveLeading and PreserveTrailing removed from tokenize.SplitterConfig.
+ - PreserveLeadingWhitespaces and PreserveTrailingWhitespaces removed from tcp.BaseConfig and udp.BaseConfig.
+
+
+### 💡 Enhancements 💡
+
+- `oauth2clientauthextension`: Enable dynamically reading ClientID and ClientSecret from files (#26117)
+ - Read the client ID and/or secret from a file by specifying the file path to the ClientIDFile (`client_id_file`) and ClientSecretFile (`client_secret_file`) fields respectively.
+ - The file is read every time the client issues a new token. This means that the corresponding value can change dynamically during the execution by modifying the file contents.
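+  A hedged sketch of what this could look like (field names are from the entry above; `token_url` and the file paths are illustrative):
+
+  ```yaml
+  extensions:
+    oauth2client:
+      token_url: https://example.com/oauth2/token
+      # Read credentials from files instead of inline values; the files
+      # are re-read each time a new token is issued, so the values can
+      # change at runtime.
+      client_id_file: /etc/otel/client_id
+      client_secret_file: /etc/otel/client_secret
+  ```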
+
+
+## v0.84.0
+
+### 🛑 Breaking changes 🛑
+
+- `memcachedreceiver`: Removes duplicate `Timeout` field. This change has no impact on end users of the component. (#26084)
+- `podmanreceiver`: Removes duplicate `Timeout` field. This change has no impact on end users of the component. (#26083)
+- `zookeeperreceiver`: Removes duplicate `Timeout` field. This change has no impact on end users of the component. (#26082)
+- `jaegerreceiver`: Deprecate remote_sampling config in the jaeger receiver (#24186)
+ The jaeger receiver will fail to start if remote_sampling config is specified in it. The `receiver.jaeger.DisableRemoteSampling` feature gate can be set to let the receiver start and treat remote_sampling config as no-op. In a future version this feature gate will be removed and the receiver will always fail when remote_sampling config is specified.
+
+- `pkg/ottl`: use IntGetter argument for Substring function (#25852)
+- `pkg/stanza`: Remove deprecated 'helper.Encoding' and 'helper.EncodingConfig.Build' (#25846)
+- `pkg/stanza`: Remove deprecated fileconsumer config structs (#24853)
+  Includes: MatchingCriteria, OrderingCriteria, NumericSortRule, AlphabeticalSortRule, TimestampSortRule
+- `googlecloudexporter`: remove retry_on_failure from the googlecloud exporter. The exporter itself handles retries, and retrying can cause issues. (#57233)
+
+### 🚩 Deprecations 🚩
+
+- `pkg/stanza`: Deprecate 'helper.EncodingConfig' and 'helper.NewEncodingConfig' (#25846)
+- `pkg/stanza`: Deprecate encoding related elements of the helper package, in favor of the new decoder package (#26019)
+  Includes the following deprecations: Decoder, NewDecoder, LookupEncoding, IsNop
+- `pkg/stanza`: Deprecate tokenization related elements of the helper package, in favor of the new tokenize package (#25914)
+  Includes the following deprecations: Flusher, FlusherConfig, NewFlusherConfig, Multiline, MultilineConfig, NewMultilineConfig, NewLineStartSplitFunc, NewLineEndSplitFunc, NewNewlineSplitFunc, Splitter, SplitterConfig, NewSplitterConfig, SplitNone
+
+### 💡 Enhancements 💡
+
+- `googlemanagedprometheus`: Add a `add_metric_suffixes` option to the googlemanagedprometheus exporter. When set to false, metric suffixes are not added. (#26071)
+- `receiver/prometheus`: translate units from prometheus to UCUM (#23208)
+
+### 🧰 Bug fixes 🧰
+
+- `receiver/influxdb`: add allowable inputs to line protocol precision parameter (#24974)
+
+## v0.83.0
+
+### 🛑 Breaking changes 🛑
+
+- `exporter/clickhouse`: Change the type of `Config.Password` to be `configopaque.String` (#17273)
+- `all`: Remove go 1.19 support, bump minimum to go 1.20 and add testing for 1.21 (#8207)
+- `solacereceiver`: Move model package to the internal package (#24890)
+- `receiver/statsdreceiver`: Move protocol and transport packages to internal (#24892)
+- `filterprocessor`: Unexport `Strict` and `Regexp` (#24845)
+- `mdatagen`: Rename the mdatagen sum field `aggregation` to `aggregation_temporality` (#16374)
+- `metricstransformprocessor`: Unexport elements of the Go API of the processor (#24846)
+- `mezmoexporter`: Unexport the `MezmoLogLine` and `MezmoLogBody` structs (#24842)
+- `pkg/stanza`: Remove deprecated 'fileconsumer.FileAttributes' (#24688)
+- `pkg/stanza`: Remove deprecated 'fileconsumer.EmitFunc' (#24688)
+- `pkg/stanza`: Remove deprecated `fileconsumer.Finder` (#24688)
+- `pkg/stanza`: Remove deprecated `fileconsumer.BaseSortRule` and `fileconsumer.SortRuleImpl` (#24688)
+- `pkg/stanza`: Remove deprecated 'fileconsumer.Reader' (#24688)
+
+### 🚩 Deprecations 🚩
+
+- `pkg/stanza`: Deprecate helper.Encoding and helper.EncodingConfig.Build (#24980)
+- `pkg/stanza`: Deprecate fileconsumer MatchingCriteria in favor of new matcher package (#24853)
+
+### 💡 Enhancements 💡
+
+- `changelog`: Generate separate changelogs for end users and package consumers (#24014)
+- `splunkhecexporter`: Add a heartbeat check at startup and a new config param, heartbeat/startup (defaults to false). This is different from healthcheck_startup, as the latter doesn't take token or index into account. (#24411)
+- `k8sclusterreceiver`: Allows disabling metrics and resource attributes (#24568)
+- `cmd/mdatagen`: Avoid reusing the same ResourceBuilder instance for multiple ResourceMetrics (#24762)
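+  The splunkhecexporter heartbeat option above might be enabled roughly like this (the `heartbeat`/`startup` nesting is an assumption based on the "heartbeat/startup" wording; `token` and `endpoint` values are illustrative):
+
+  ```yaml
+  exporters:
+    splunk_hec:
+      token: "00000000-0000-0000-0000-000000000000"
+      endpoint: https://splunk:8088/services/collector
+      # Send a heartbeat at startup to verify connectivity
+      # (nesting assumed from the entry above).
+      heartbeat:
+        startup: true
+  ```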
+
+### 🧰 Bug fixes 🧰
+
+- `splunkhecreceiver`: Aligns the success response body with Splunk Enterprise (#19219)
+  Changes the response from plaintext "ok" to JSON {"text":"success", "code":0}
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2f4dbb015c5f..6fa293f713dd 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,8 +2,1208 @@
# Changelog
+Starting with version v0.83.0, this changelog includes only user-facing changes.
+If you are looking for developer-facing changes, check out [CHANGELOG-API.md](./CHANGELOG-API.md).
+
+## v0.95.0
+
+### 🛑 Breaking changes 🛑
+
+- `all`: Bump minimum version to go 1.21 (#31105)
+- `receiver/elasticsearch`: Remove receiver.elasticsearch.emitNodeVersionAttr feature gate (#31221)
+- `receiver/mongodb`: Bump receiver.mongodb.removeDatabaseAttr feature gate to beta (#31212)
+- `splunkenterprisereceiver`: adds additional metrics specific to indexers (#30704)
+- `exporter/datadogexporter`: Disable APM stats computation in Datadog Exporter by default, `exporter.datadogexporter.DisableAPMStats` is changed to beta (#31219)
+- `extension/storage`: The `filestorage` and `dbstorage` extensions are now standalone modules. (#31040)
+ If using the OpenTelemetry Collector Builder, you will need to update your import paths to use the new module(s).
+ - `github.com/open-telemetry/opentelemetry-collector-contrib/extension/storage/filestorage`
+ - `github.com/open-telemetry/opentelemetry-collector-contrib/extension/storage/dbstorage`
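+  For custom collectors built with the OpenTelemetry Collector Builder, the new standalone modules would be referenced roughly like this in the builder config (the version shown is illustrative):
+
+  ```yaml
+  # builder-config.yaml (OpenTelemetry Collector Builder)
+  extensions:
+    # The storage extensions are now separate Go modules, so each is
+    # referenced by its own module path.
+    - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/storage/filestorage v0.95.0
+    - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/storage/dbstorage v0.95.0
+  ```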
+
+
+### 🚩 Deprecations 🚩
+
+- `f5cloudexporter`: deprecating component that is no longer maintained (#31186)
+
+### 🚀 New components 🚀
+
+- `confmap/secretsmanagerprovider`: Initial implementation of the secrets manager provider. Allows fetching variables from AWS Secrets Manager (#19368)
+- `deltatocumulative`: adds processor to convert sums (initially) from delta to cumulative temporality (#30705)
+
+### 💡 Enhancements 💡
+
+- `hostmetricsreceiver`: Add a new optional resource attribute `process.cgroup` to the `process` scraper of the `hostmetrics` receiver. (#29282)
+- `datadogexporter`: Adds support for stable JVM metrics introduced in opentelemetry-java-instrumentation v2.0.0 (#31194)
+ See https://github.com/DataDog/opentelemetry-mapping-go/pull/265 for details.
+- `datasetexporter`: Release resources if they haven't been used for some time. (#31292)
+- `datadogconnector`: Add a trace config `peer_tags` to specify supplementary peer tags for APM stats. (#31158)
+- `datadogexporter`: Add a trace config `peer_tags` to specify supplementary peer tags for APM stats. (#31158)
+- `awss3exporter`: Add a marshaler that stores the body of log records in s3. (#30318)
+- `pkg/ottl`: Adds a new ParseCSV converter that can be used to parse CSV strings. (#30921)
+- `loadbalancingexporter`: Add benchmarks for Metrics and Traces (#30915)
+- `pkg/ottl`: Add support to specify the format for a replacement string (#27820)
+- `pkg/ottl`: Add `ParseKeyValue` function for parsing key value pairs from a target string (#30998)
+- `receivercreator`: Remove use of `ReportFatalError` (#30596)
+- `processor/tail_sampling`: Add metrics that measure the number of sampled spans and the number of spans that are dropped due to sampling decisions. (#30482)
+- `exporter/signalfx`: Send histograms in otlp format with new config `send_otlp_histograms` option (#26298)
+- `receiver/signalfx`: Accept otlp protobuf requests when content-type is "application/x-protobuf;format=otlp" (#26298)
+- `signalfxreceiver`: Remove deprecated use of `host.ReportFatalError` (#30598)
+- `syslogexporter`: Add support for sending RFC 6587 octet counts in syslog messages (#31013)
+- `connector/datadogconnector`: Internal telemetry metrics for the Datadog traces exporter are now reported through the Collector's self-telemetry (#31179)
+ - These internal metrics may be dropped or change name without prior notice
+
+- `exporter/datadogexporter`: Internal telemetry metrics for the Datadog traces exporter are now reported through the Collector's self-telemetry (#31179)
+ - These internal metrics may be dropped or change name without prior notice
+
+
+### 🧰 Bug fixes 🧰
+
+- `pkg/stanza`: Add 'allow_skip_pri_header' flag to the syslog settings. (#30397)
+  Allow parsing syslog records without a PRI header. Currently the PRI header is being enforced although it's not mandatory per the RFC standard. Since influxdata/go-syslog is not maintained, we had to switch to haimrubinstein/go-syslog.
+
+- `datadogexporter`: Fix bug where multiple resources would cause datadogexporter to send extraneous additional stats buckets. (#31173)
+- `extension/storage`: Ensure fsync is turned on after compaction (#20266)
+- `logstransformprocessor`: Fix potential panic on shutdown due to incorrect shutdown order (#31139)
+- `logicmonitorexporter`: Fix memory leak on shutdown (#31150)
+- `opencensusreceiver`: Fix memory leak on shutdown (#31152)
+- `receiver/prometheusreceiver`: prometheusreceiver fix translation of metrics with _created suffix (#30309)
+- `pkg/stanza`: Fixed a bug in the keyvalue_parser where quoted values could be split if they contained a delimiter. (#31034)
+
+## v0.94.0
+
+### 🛑 Breaking changes 🛑
+
+- `servicegraphprocessor`: removed deprecated component, use the servicegraph connector instead. (#26091)
+- `datadogconnector`: Enable feature gate `connector.datadogconnector.performance` by default. (#30829)
+ See https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/connector/datadogconnector#feature-gate-for-performance for caveats of this feature gate.
+- `datadogprocessor`: Delete datadogprocessor which has been deprecated since v0.84.0 (#31026)
+ Use datadogconnector instead.
+- `kafkareceiver`: Standardize the default topic names used by the metrics and logs receivers to match those used by the kafkaexporter metrics and logs exporters (#27292)
+ If you are using the Kafka receiver in a logs and/or a metrics pipeline
+ and you are not customizing the name of the topic to read from with the `topic` property,
+ the receiver will now read from `otlp_logs` or `otlp_metrics` topic instead of `otlp_spans` topic.
+ To maintain previous behavior, set the `topic` property to `otlp_spans`.
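+  For example, a sketch of pinning the previous behavior for a logs pipeline (the `topic` property is confirmed by the entry above; the component instance name is illustrative):
+
+  ```yaml
+  receivers:
+    kafka/logs:
+      # Restore the pre-v0.94.0 behavior of reading logs
+      # from the spans topic.
+      topic: otlp_spans
+  ```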
+
+- `pkg/stanza`: Entries are no longer logged during error conditions. (#26670)
+ This change is being made to ensure sensitive information contained in logs are never logged inadvertently.
+ This change is a breaking change because it may change user expectations. However, it should require
+ no action on the part of the user unless they are relying on logs from a few specific error cases.
+
+- `pkg/stanza`: Invert recombine operator's 'overwrite_with' default value. (#30783)
+ Previously, the default value was `oldest`, meaning that the recombine operator _should_ emit the
+ first entry from each batch (with the recombined field). However, the actual behavior was inverted.
+ This fixes the bug but also inverts the default setting so as to effectively cancel out the bug fix
+ for users who were not using this setting. For users who were explicitly setting `overwrite_with`,
+ this corrects the intended behavior.
+
+
+### 🚩 Deprecations 🚩
+
+- `skywalkingexporter`: Mark the component as unmaintained. If we don't find new maintainers, it will be deprecated and removed. (#23796)
+
+### 🚀 New components 🚀
+
+- `sumologicextension`: add configuration and readme (#29601)
+- `failoverconnector`: Refactor of connector to separate concerns between managing indexes and the core failover component (#20766)
+- `otelarrow`: Skeleton of new OpenTelemetry Protocol with Apache Arrow Receiver (#26491)
+- `processor/interval`: Adds the initial structure for a new processor that aggregates metrics and periodically forwards the latest values to the next component in the pipeline. (#29461)
+  As per the CONTRIBUTING.md recommendations, this PR only creates the basic structure of the processor. The concrete implementation will come as a separate follow-up PR.
+
+
+### 💡 Enhancements 💡
+
+- `receiver/journald`: add a new config option "all" that turns on full output from journalctl, including lines that are too long. (#30920)
+- `pkg/stanza`: Add support for the JSON array parser in header configuration. (#30321)
+- `awss3exporter`: Add the ability to export trace/log/metrics in OTLP ProtoBuf format. (#30682)
+- `dockerobserver`: Upgrading Docker API version default from 1.22 to 1.24 (#30900)
+- `filterprocessor`: move metrics from OpenCensus to OpenTelemetry (#30736)
+- `groupbyattrsprocessor`: move metrics from OpenCensus to OpenTelemetry (#30763)
+- `datadogconnector`: Add trace configs that mirror datadog exporter (#30787)
+  - ignore_resources: disable certain traces based on their resource name
+  - span_name_remappings: map of datadog span names and preferred name to map to
+  - span_name_as_resource_name: use OTLP span name as datadog operation name
+  - compute_stats_by_span_kind: enables an additional stats computation check based on span kind
+  - peer_tags_aggregation: enables aggregation of peer related tags
+  - trace_buffer: specifies the buffer size for datadog trace payloads
+
+- `elasticsearchexporter`: Add `mapping.mode: raw` configuration option (#26647)
+  Setting `mapping.mode: raw` in the Elasticsearch exporter's configuration
+  will result in logs and traces being indexed into Elasticsearch with their
+  attribute fields directly at the root level of the document instead of inside an
+  `Attributes` object. Similarly, this setting will also result in traces being
+  indexed into Elasticsearch with their event fields directly at the root level
+  of the document instead of inside an `Events` object.
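+  A minimal sketch of the option (the `mapping.mode` key is given by the entry above; the `endpoints` value is illustrative):
+
+  ```yaml
+  exporters:
+    elasticsearch:
+      endpoints: [https://elasticsearch:9200]
+      mapping:
+        # Index attribute/event fields at the document root rather than
+        # under Attributes/Events objects.
+        mode: raw
+  ```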
+
+- `loadbalancingexporter`: Optimize metrics and traces export (#30141)
+- `gitproviderreceiver`: Add pull request metrics (#22028)
+ - git.repository.pull_request.open.count
+ - git.repository.pull_request.open.time
+ - git.repository.pull_request.merged.count
+ - git.repository.pull_request.merged.time
+ - git.repository.pull_request.approved.time
+
+- `all`: Add `component.UseLocalHostAsDefaultHost` feature gate that changes default endpoints from 0.0.0.0 to localhost (#30702)
+ This change affects the following components:
+ - extension/awsproxy
+ - extension/health_check
+ - extension/jaegerremotesampling
+ - internal/aws/proxy
+ - processor/remotetap
+ - receiver/awsfirehose
+ - receiver/awsxray
+ - receiver/influxdb
+ - receiver/jaeger
+ - receiver/loki
+ - receiver/opencensus
+ - receiver/sapm
+ - receiver/signalfx
+ - receiver/skywalking
+ - receiver/splunk_hec
+ - receiver/zipkin
+ - receiver/zookeeper
+
+- `googlepubsubreceiver`: Add support for GoogleCloud logging encoding (#29299)
+- `processor/resourcedetectionprocessor`: Detect Azure cluster name from IMDS metadata (#26794)
+- `processor/transform`: Add `copy_metric` function to allow duplicating a metric (#30846)
+
+### 🧰 Bug fixes 🧰
+
+- `basicauthextension`: Accept empty usernames. (#30470)
+ Per https://datatracker.ietf.org/doc/html/rfc2617#section-2, username and password may be empty strings ("").
+ The validation used to enforce that usernames cannot be empty.
+
+- `servicegraphconnector`: update prefix to match the component type (#31023)
+- `datadog/connector`: Create a separate connector in the Datadog connector for the trace-to-metrics and trace-to-trace pipelines. It should reduce the number of conversions we do and help with Datadog connector performance. (#30828)
+ Simplify datadog/connector with two separate connectors in trace-to-metrics pipeline and trace-to-trace pipeline.
+- `datadogreceiver`: Set AppVersion to allow Datadog version property to transform properly to service.version resource attribute (#30225)
+- `cmd/opampsupervisor`: Fix memory leak on shutdown (#30438)
+- `exporter/datadog`: Fixes a bug where empty histograms were not being sent to the backend in the distributions mode. (#31019)
+- `pkg/ottl`: Fix parsing of string escapes in OTTL (#23238)
+- `pkg/stanza`: Recombine operator should always recombine partial logs (#30797)
+ Previously, certain circumstances could result in partial logs being emitted without any
+  recombination. This could occur when using `is_first_entry`, if the first partial log from
+ a source was emitted before a matching "start of log" indicator was found. This could also
+ occur when the collector was shutting down.
+
+- `pkg/stanza`: Fix bug where recombine operator's 'overwrite_with' condition was inverted. (#30783)
+- `exporter/signalfx`: Use "unknown" value for the environment correlation calls as fallback. (#31052)
+ This fixed the APM/IM correlation in the Splunk Observability UI for the users that send traces with no "deployment.environment" resource attribute value set.
+- `namedpipereceiver`: Fix SIGSEGV when named pipe creation fails (#31088)
+
+## v0.93.0
+
+### 🛑 Breaking changes 🛑
+
+- `azuremonitorexporter`: Fixed an issue where span attributes with double and int values were incorrectly added to the `measurements` field in the Application Insights schema. These attributes are now correctly placed in the `properties` field. (#29683)
+- `vcenterreceiver`: Bump "receiver.vcenter.emitPerfMetricsWithObjects" feature gate (#30615)
+- `docker`: Adopt api_version as strings to correct invalid float truncation (#24025)
+- `extension/filestorage`: Replace path-unsafe characters in component names (#3148)
+ The feature gate `extension.filestorage.replaceUnsafeCharacters` is now enabled by default.
+ See the File Storage extension's README for details.
+
+- `postgresqlreceiver`: add schema attribute to postgresqlreceiver (#29559)
+ Adds a new resource attribute to the PSQL receiver to store the schema of the table or index
+ Existing table attributes are adjusted to not include the schema, which was inconsistently used
+
+
+### 🚩 Deprecations 🚩
+
+- `mdatagen`: Deprecate mdatagen in preparation for its move to opentelemetry-collector (#30497)
+
+### 🚀 New components 🚀
+
+- `solarwindsapmsettingsextension`: added configuration and readme (#27668)
+- `alertmanagerexporter`: Add Alertmanager exporter to builder config (#23569)
+- `otelarrow`: Skeleton of new OpenTelemetry Protocol with Apache Arrow Exporter. (#26491)
+- `osqueryreceiver`: Adds osquery receiver skeleton (#30375)
+
+### 💡 Enhancements 💡
+
+- `pkg/stanza`: Add a json array parser operator and an assign keys transformer. (#30321)
+  The JSON array parser operator can be used to parse a JSON array string input into a list of objects.
+  The assign keys transformer can be used to assign keys from the configuration to an input list.
+
+- `splunkhecexporter`: Batch data according to access token and index, if present. (#30404)
+- `awscloudwatchlogsexporter`: Add instrumentation scope in log records exported to CloudWatch logs (#30316, #29884)
+- `cassandraexporter`: added authorization by username and password (#27827)
+- `lokiexporter`: migrate metrics to use OpenTelemetry (#30170)
+- `cmd/telemetrygen`: This updates telemetrygen to create multiple child spans per trace and enhances the tool's functionality for load testing the remote tracing backend. (#30687)
+- `cmd/telemetrygen`: This updates telemetrygen with TLS/mTLS options to test the security of telemetry ingestion services and infrastructure for secure communication. To illustrate the usage, a new example, secure-tracing is added to examples collection. (#29681)
+- `k8sattributesprocessor`: Apply lifecycle tests to k8sprocessor, change its behavior to report fatal error (#30387)
+- `k8sclusterreceiver`: add new disabled os.description, k8s.container_runtime.version resource attributes (#30342)
+- `k8sclusterreceiver`: add os.type resource attribute (#30342)
+- `kubeletstatsreceiver`: Add new `*.cpu.usage` metrics. (#25901)
+- `oidcauthextension`: Move validation logic outside of the extension creation, to the configuration validation (#30460)
+- `datadogexporter`: Add support for setting host tags via host metadata. (#30680)
+ When the `datadog.host.use_as_metadata` resource attribute is set to `true`:
+ - Nonempty string-value resource attributes starting with `datadog.host.tag.` will be added as host tags for the host associated with the resource.
+  - deployment.environment and k8s.cluster.name are mapped to Datadog names and added as host tags for the host associated with the resource.
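+  As a hedged illustration, such resource attributes could be attached with the resource processor. The attribute names come from the entry above; using the resource processor for this is an assumption, since the attributes may already be present on your telemetry:
+
+  ```yaml
+  processors:
+    resource:
+      attributes:
+        # Opt this resource's host into host-metadata-based tagging.
+        - key: datadog.host.use_as_metadata
+          value: "true"
+          action: upsert
+        # Becomes a host tag for the associated host (prefix stripped),
+        # e.g. team:backend.
+        - key: datadog.host.tag.team
+          value: backend
+          action: upsert
+  ```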
+
+- `opensearchexporter`: added opensearch exporter to the contrib distribution metadata (#30183)
+- `pkg/ottl`: Add `flatten` function for flattening maps (#30455)
+- `redisreceiver`: Adds a metric for slave_repl_offset (#6942)
+ also adds a shell script to set up docker-compose integration test
+- `exporter/datadogexporter`: Add kafka metrics mapping. This allows users of the JMX Receiver/JMX Metrics Gatherer and kafka metrics receiver to have access to the OOTB kafka Dashboard. (#30731)
+- `receiver/sqlquery`: Add debug log when running SQL query (#29672)
+- `cmd/opampsupervisor`: Use a bootstrapping flow to get the Collector's agent description. (#21071)
+
+### 🧰 Bug fixes 🧰
+
+- `receiver/filelog`: fix panic after upgrading from v0.71.0 when using storage (#30235)
+- `clickhouseexporter`: Fix a bug when inserting metrics data in the ClickHouse exporter (#30210)
+- `prometheusremotewriteexporter`: Check if the context was canceled by a timeout in the component level to avoid unnecessary retries. (#30308)
+- `elasticsearchreceiver`: Fix nil panic on non-linux systems (#30140)
+- `kafkareceiver`: The Kafka receiver now exports some partition-specific metrics per-partition, with a `partition` tag (#30177)
+ The following metrics now render per partition:
+ - kafka_receiver_messages
+ - kafka_receiver_current_offset
+ - kafka_receiver_offset_lag
+
+
+## v0.92.0
+
+### 🛑 Breaking changes 🛑
+
+- `httpforwarder`: Use confighttp.HTTPDefaultClientSettings when configuring the HTTPClientSettings for the httpforwarder extension. (#6641)
+ By default, the HTTP forwarder extension will now use the defaults set in the extension:
+ * The idle connection timeout is set to 90s.
+ * The max idle connection count is set to 100.
+
+- `pkg/ottl`: Now validates against extraneous path segments that a context does not know how to use. (#30042)
+- `pkg/ottl`: Throw an error if keys are used on a path that does not allow them. (#30162)
+- `tanzuexporter`: Remove tanzuexporter; users can still use version 0.91. (#30184)
+- `zipkinexporter`: Use default client HTTP settings in zipkinexporter, move validation to config validation (#29931)
+
+### 🚩 Deprecations 🚩
+
+- `mdatagen`: Component is being moved to core to allow it to be used there as well. (#30173)
+- `k8sclusterreceiver`: deprecate optional k8s.kubeproxy.version resource attribute (#29748)
+- `configschema`: Deprecate configschema in favor of generating documentation with mdatagen as part of its metadata generation (#30187)
+
+### 🚀 New components 🚀
+
+- `failoverconnector`: Provide core logic for the failover connector and implement failover for trace signals (#20766)
+- `failoverconnector`: Extend the failover connector to metric and log pipelines (#20766)
+- `namedpipereceiver`: Add "namedpipereceiver" that allows ingesting logs over a Named Pipe (#27234)
+
+### 💡 Enhancements 💡
+
+- `encoding/jaegerencodingextension`: Add support for JSON protocol for jaeger codec (#6272)
+- `githubgen`: Adds a set of distribution reports that can be used to notify distribution maintainers of any changes to distributions. (#28628)
+- `vcenterreceiver`: Add explicit statement of support for version 8 of ESXi and vCenter (#30274)
+- `carbonexporter`: Add support for resourcetotelemetry (#29879)
+- `carbonexporter`: Add retry and queue, use standard configs (#29862)
+- `carbonexporter`: Add ability to configure max_idle_conns (#30109)
+- `mdatagen`: add Meter/Tracer methods to simplify instrumenting components (#29927)
+- `servicegraphprocessor`: update own telemetry to use otel (#29917)
+- `datadogexporter`: DataDog log timestamp (i.e. `@timestamp`) now includes milliseconds (#29785)
+- `exporter/elasticsearch`: set the User-Agent header in the outgoing HTTP requests. (#29898)
+- `elasticsearchexporter`: add missing trace status description in span (#27645)
+- `routingconnector`: routingconnector supports matching the statement only once (#26353)
+- `filestatsreceiver`: Add a file.count metric to filestatsreceiver that reports the number of files matched by the receiver (#24651)
+- `filterprocessor`: Add telemetry for metrics, logs, and spans that were intentionally dropped via filterprocessor. (#13169)
+- `googlecloudpubsubexporter`: Expose `Endpoint` and `Insecure` in configuration. (#29304)
+- `exporter/honeycombmarker`: set the User-Agent header in the outgoing HTTP requests (#29894)
+- `pkg/ottl`: Add Hour OTTL Converter (#29468)
+- `kafkaexporter`: add the ability to publish Kafka messages with the TraceID as the message key, allowing partitioning of the Kafka topic. (#12318)
+- `kafkareceiver`: Add three new metrics to record unmarshal errors. (#29302)
+- `kineticaexporter`: added metrics handling (#27239)
+- `logzioexporter`: add scopename to exported logs (#20659)
+  When it exists, the scope name will be added to exported logs under the scopeName field.
+- `hostmetricsreceiver`: Add `system.memory.limit` metric reporting the total memory available. (#30306)
+ This metric is opt-in. To enable it, set `scrapers::memory::metrics::system.memory.limit::enabled` to `true` in the hostmetrics config.
+
+- `datadogexporter`: Add support for more semantic conventions related to host metadata (#30158)
+ The following semantic conventions are now detected for host metadata:
+ - `host.ip`
+ - `host.mac`
+ - `system.cpu.physical.count`
+ - `system.cpu.logical.count`
+ - `system.cpu.frequency`
+ - `system.memory.limit`
+
+- `prometheusexporter`: Accumulate histograms with delta temporality (#4968)
+- `kafkaexporter`: Adds the ability to configure the Kafka client's Client ID. (#30144)
+- `pkg/stanza`: Remove sampling policy from logger (#23801)
+- `resourcedetectionprocessor`: Add "aws.ecs.task.id" attribute (#8274)
+ Resourcedetectionprocessor now exports "aws.ecs.task.id" attribute, in addition to "aws.ecs.task.arn".
+ This allows exporters like "awsemfexporter" to automatically pick up that attribute and make it available
+ in templating (e.g. to use in CloudWatch log stream name).
+
+- `spanmetricsconnector`: Fix OOM issue for spanmetrics by limiting the number of exemplars that can be added to a unique dimension set (#27451)
+- `connector/spanmetrics`: Configurable resource metrics key attributes, filter the resource attributes used to create the resource metrics key. (#29711)
+ This enhancement can be used to fix broken spanmetrics counters after a span producing service restart, when resource attributes contain dynamic/ephemeral values (e.g. process id).
+- `splunkhecreceiver`: Return a JSON response from the raw endpoint when it is successful (#20766)
+- `logicmonitorexporter`: add support for log resource mapping configurations (#29732)
+- `sqlqueryreceiver`: Swap MS SQL Server driver from legacy 'denisenkom' to official Microsoft fork (#27200)
+
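+For reference, the opt-in `system.memory.limit` metric above can be enabled with a hostmetrics configuration along these lines (only the fields named in the entry are used):
+
+```yaml
+receivers:
+  hostmetrics:
+    scrapers:
+      memory:
+        metrics:
+          # Opt-in metric; disabled unless explicitly enabled
+          system.memory.limit:
+            enabled: true
+```
+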
+### 🧰 Bug fixes 🧰
+
+- `awsemfexporter`: AWS EMF Exporter will drop metrics that contain Inf values to avoid JSON marshal errors. (#29336)
+- `azuretranslatorpkg`: Some data received from Azure does not meet the common schema specification for the timestamp. Allow the attribute `timeStamp` to be used as an alternative to the standard `time`. (#28806)
+- `datadogconnector`: Add feature flag to address memory issue with Datadog Connector (#29755)
+- `filterset`: Fix concurrency issue when enabling caching. (#11829)
+- `pkg/ottl`: Fix issue with the hash value of a match subgroup in replace_pattern functions. (#29409)
+- `opampsupervisor`: Fix panic on agent shutdown (#29955)
+- `prometheusreceiver`: Fix configuration validation to allow specification of Target Allocator configuration without providing scrape configurations (#30135)
+- `carbonexporter`: Fix metric with empty numberdatapoint serialization (#30182)
+- `wavefrontreceiver`: Return an error if input is partially quoted (#30315)
+- `hostmetricsreceiver`: change the unit of cpu.load.average metrics from 1 to {thread} (#29914)
+- `bearertokenauthextension`: Fix the HTTP server authenticator looking up only the lowercase `authorization` header; incoming HTTP headers are canonicalized as `Authorization`, so requests always received 401 Unauthorized. The header is now matched regardless of case. (#24656)
+- `pkg/ottl`: Fix bug where the Converter `IsBool` was not usable (#30151)
+- `prometheusremotewriteexporter`: sanitize retry default settings (#30286)
+- `snowflakereceiver`: Fixed bug where storage metrics for snowflake were not being reported (#29750)
+- `apachesparkreceiver`: propagate application list errors to reveal underlying issue (#30278)
+- `haproxyreceiver`: Support empty values in haproxy stats. (#30252)
+- `time`: The `%z` strptime format now correctly parses `Z` as a valid timezone (#29929)
+ `strptime(3)` says that `%z` is "an RFC-822/ISO 8601 standard
+ timezone specification", but the previous code did not allow the
+ string "Z" to signify UTC time, as required by ISO 8601. Now, both
+ `+0000` and `Z` are recognized as UTC times in all components that
+ handle `strptime` format strings.
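+
+As an illustration of the `%z` fix above, a stanza `time_parser` layout such as the following (a sketch; the `parse_from` field is a hypothetical example) now accepts both `2024-01-01T00:00:00+0000` and `2024-01-01T00:00:00Z`:
+
+```yaml
+operators:
+  - type: time_parser
+    parse_from: attributes.time
+    layout_type: strptime
+    # %z now matches "Z" as well as numeric offsets like "+0000"
+    layout: '%Y-%m-%dT%H:%M:%S%z'
+```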
+
+
+## v0.91.0
+
+### 🚀 New components 🚀
+
+- `alertmanagerexporter`: Add Alertmanager exporter implementation and tests (#23569)
+
+### 💡 Enhancements 💡
+
+- `spanmetricsconnector`: Add exemplars to sum metric (#27451)
+- `exporter/datadogexporter`: Add support for nested log attributes. (#29633)
+- `jaegerreceiver,jaegerremotesamplingextension`: mark featuregates to replace Thrift-gen with Proto-gen types for sampling strategies as stable (#27636)
+ The following featuregates are stable:
+ - extension.jaegerremotesampling.replaceThriftWithProto
+ - receiver.jaegerreceiver.replaceThriftWithProto
+
+- `awsemfexporter/awscloudwatchlogsexporter`: Add component name to user agent header for outgoing put log event requests (#29595)
+- `elasticsearchexporter`: Logstash format compatibility. Traces or Logs data can be written into an index in logstash format. (#29624)
+- `extension/opampextension`: Implement `extension.NotifyConfig` to be notified of the Collector's effective config and report it to the OpAMP server. (#27293)
+- `receiver/influxdbreceiver`: Endpoint `/ping` added to enhance compatibility with third party products (#29594)
+- `kafkareceiver`: Add the ability to consume logs from Azure Diagnostic Settings streamed through Event Hubs using the Kafka API. (#18210)
+- `resourcedetectionprocessor`: Add detection of host.ip to system detector. (#24450)
+- `resourcedetectionprocessor`: Add detection of host.mac to system detector. (#29587)
+- `pkg/ottl`: Add `silent` ErrorMode to allow disabling logging of errors that are ignored. (#29710)
+- `postgresqlreceiver`: Add config property for excluding specific databases from scraping (#29605)
+- `redisreceiver`: Upgrade the redis library dependency to resolve security vulns in v7 (#29600)
+- `signalfxexporter`: Enable HTTP/2 health check by default (#29716)
+- `splunkhecexporter`: Enable HTTP/2 health check by default (#29717)
+- `statsdreceiver`: Add support for 'simple' tags that do not have a defined value, to accommodate DogStatsD metrics that may utilize these. (#29012)
+ This functionality is gated behind a new `enable_simple_tags` config boolean, as it is not part of the StatsD spec.
+
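+The new `enable_simple_tags` option above can be switched on like this (a minimal sketch; the endpoint is an example value):
+
+```yaml
+receivers:
+  statsd:
+    endpoint: 0.0.0.0:8125
+    # Accept DogStatsD-style tags that have no value (not part of the StatsD spec)
+    enable_simple_tags: true
+```
+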
+### 🧰 Bug fixes 🧰
+
+- `exporter/prometheusremotewrite`: prometheusremotewrite exporter fix created metrics missing timestamp (#24915)
+- `connector/spanmetrics`: Fix memory leak when the cumulative temporality is used. (#27654)
+- `awscontainerinsightreceiver`: Filter terminated pods from node request metrics (#27262)
+- `clickhouseexporter`: Fix regression error introduced in #29095 (#29573)
+- `prometheusexporter`: Fix panic when exporter mutates data (#29574)
+- `splunkhecexporter`: Do not send null event field values in HEC events. Replace null values with an empty string. (#29551)
+- `k8sobjectsreceiver`: fix k8sobjects receiver fails when some unrelated Kubernetes API is down (#29706)
+- `resourcedetectionprocessor`: Change type of `host.cpu.model.id` and `host.cpu.model.family` from int to string. (#29025)
+ - Disable the `processor.resourcedetection.hostCPUModelAndFamilyAsString` feature gate to get the old behavior.
+
+- `filelogreceiver`: Fix problem where checkpoints could be lost when the collector is shut down abruptly (#29609, #29491)
+- `googlecloudspannerreceiver`: The Google Cloud Spanner receiver previously generated an exception and exited if it attempted to read data from a database that doesn't exist, even though it is normal for a single receiver to poll multiple databases. It now gracefully generates an error for an unreadable or missing database and continues reading the other databases. (#26732)
+- `pkg/stanza`: Allow `key_value_parser` to parse values that contain the delimiter string. (#29629)
+
+## v0.90.1
+
+### 🧰 Bug fixes 🧰
+
+- `exporters`: Upgrade core dependency to remove noisy "Exporting finished" log message in exporters. (#29612)
+
+## v0.90.0
+
+### 🛑 Breaking changes 🛑
+
+- `dockerstatsreceiver`: Add [container.cpu.limit], [container.cpu.shares] and [container.restarts] metrics from docker container api (#21087)
+ It requires API version 1.25 or greater.
+
+### 🚀 New components 🚀
+
+- `failoverconnector`: New component that will allow for pipeline failover triggered by the health of target downstream exporters (#20766)
+- `gitproviderreceiver`: add repo, branch, and contributor count metrics (#22028)
+
+### 💡 Enhancements 💡
+
+- `opensearchexporter`: Promote opensearchexporter to alpha. (#24668)
+- `awsemfexporter`: Improve NaN value checking for Summary metric types. (#28894)
+- `awsemfexporter`: Logs relating to the start and finish of processing metrics have been reduced to debug level (#29337)
+- `azuremonitorreceiver`: Support Azure gov cloud (#27573)
+- `clickhouseexporter`: Added support for more control over TTL configuration. Previously, TTL could be set only in days; it can now also be set in hours, minutes, and seconds (config property renamed from `ttl_days` to `ttl`). (#28675)
+- `datasetexporter`: Collect usage metrics with Otel and send grouped attributes in session info. (#27650, #27652)
+- `resourcedetectionprocessor`: Add k8s cluster name detection when running in EKS (#26794)
+- `pkg/ottl`: Add new IsDouble function to facilitate type checking. (#27895)
+- `configschema`: Generate metadata for connectors. (#26990)
+- `telemetrygen`: Exposes the span duration as a command line argument `--span-duration` (#29116)
+- `honeycombmarkerexporter`: Change honeycombmarkerexporter to alpha (#27666)
+- `mysqlreceiver`: expose tls in mysqlreceiver (#29269)
+ If `tls` is not set, the default is to disable TLS connections.
+- `processor/transform`: Convert between sum and gauge in metric context when alpha feature gate `processor.transform.ConvertBetweenSumAndGaugeMetricContext` enabled (#20773)
+- `receiver/mongodbatlasreceiver`: adds project config to mongodbatlas metrics to filter by project name and clusters. (#28865)
+- `pkg/stanza`: Add "namedpipe" operator. (#27234)
+- `pkg/resourcetotelemetry`: Do not clone data in pkg/resourcetotelemetry by default (#29327)
+ - The resulting consumer will be marked as `MutatesData` instead
+
+- `pkg/stanza`: Improve performance by not calling decode when nop encoding is defined (#28899)
+- `exporter/prometheusremotewrite`: prometheusremotewrite exporter add option to send metadata (#13849)
+- `receivercreator`: Added support for discovery of endpoints based on K8s services (#29022)
+  Discovering endpoints based on K8s services enables dynamic probing of K8s services, leveraging for example the httpcheckreceiver.
+- `signalfxexporter`: change default timeout to 10 seconds (#29436)
+- `awss3exporter`: add support for `s3_force_path_style` and `disable_ssl` parameters (#29331)
+  These parameters help support alternative object storage systems that are not compatible with domain-style paths or are hosted without SSL (for example, deployed in a Kubernetes namespace).
+
+- `hostmetricsreceiver`: Add optional Linux-only metric `system.linux.memory.available` (#7417)
+ This is an alternative to `system.memory.usage` metric with `state=free`.
+  Linux kernels starting from 3.14 export "available" memory. It takes "free" memory as a baseline, and then factors in kernel-specific values.
+ This is supposed to be more accurate than just "free" memory.
+ For reference, see the calculations [here](https://superuser.com/a/980821).
+ See also `MemAvailable` in [/proc/meminfo](https://man7.org/linux/man-pages/man5/proc.5.html).
+
+- `azuremonitorexporter`: Updated Azure Monitor Exporter service version from v2.0 to v2.1. (#29234)
+
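+The `tls` setting newly exposed by the mysqlreceiver entry above follows the standard collector TLS config fields; a sketch with a hypothetical endpoint, user, and CA path:
+
+```yaml
+receivers:
+  mysql:
+    endpoint: localhost:3306
+    username: otel
+    password: ${env:MYSQL_PASSWORD}
+    tls:
+      # Omitting the tls block entirely disables TLS connections (the default)
+      ca_file: /etc/ssl/certs/mysql-ca.pem
+```
+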
+### 🧰 Bug fixes 🧰
+
+- `cassandraexporter`: Exist check for keyspace and dynamic timeout (#27633)
+- `datadogreceiver`: Set `telemetry.sdk.language=dotnet` instead of `.NET` (#29459)
+- `filelogreceiver`: Fix issue where files were unnecessarily kept open on Windows (#29149)
+- `receiver/activedirectoryds`: Fix shutdown of `activedirectorydsreceiver` when shutdown was called right after creation, without a corresponding start call. (#29505)
+- `honeycombmarkerexporter`: Fix default api_url and dataset_slug (#29309)
+- `influxdbexporter`: When InfluxDB v1 compatibility is enabled AND username&password are set, the exporter panics. Not any more! (#27084)
+- `mongodbreceiver`: add `receiver.mongodb.removeDatabaseAttr` Alpha feature gate to remove duplicate database name attribute (#24972)
+- `pkg/stanza`: Fix panic during stop for udp async mode only. (#29120)
+
+## v0.89.0
+
+### 🛑 Breaking changes 🛑
+
+- `pkg/stanza`: Improve parsing of Windows Event XML by handling anonymous `Data` elements. (#21491)
+ This improves the contents of Windows log events for which the publisher manifest is unavailable. Previously, anonymous `Data` elements were ignored. This is a breaking change for users who were relying on the previous data format.
+- `processor/k8sattributes`: Graduate "k8sattr.rfc3339" feature gate to Beta. (#28817)
+ Time format of `k8s.pod.start_time` attribute value migrated from RFC3339:
+ Before: 2023-07-10 12:34:39.740638 -0700 PDT m=+0.020184946
+ After: 2023-07-10T12:39:53.112485-07:00
+  The feature gate can be temporarily reverted by adding `--feature-gate=-k8sattr.rfc3339` to the command line.
+
+- `filelogreceiver`: Change "Started watching file" log behavior (#28491)
+  Previously, every unique file path which was found by the receiver would be remembered indefinitely.
+  This list was kept independently of the uniqueness / checkpointing mechanism (which does not rely on the file path).
+  The purpose of this list was to allow us to emit a log whenever a path was seen for the first time.
+  This removes the separate list and relies instead on the same mechanism as checkpointing. Now, a similar log is emitted
+  any time a file is found which is not currently checkpointed. Because the checkpointing mechanism does not maintain history
+  indefinitely, it is now possible that a log will be emitted more than once for the same file path. This will happen when no
+  file exists at the path for a period of time.
+
+- `dockerstatsreceiver`: container.cpu.percent metric is removed in favor of container.cpu.utilization (#21807)
+  The metric `container.cpu.percent` is now removed. `container.cpu.utilization` is enabled by default as a replacement.
+ For details, see the [docs](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/dockerstatsreceiver#transition-to-cpu-utilization-metric-name-aligned-with-opentelemetry-specification).
+
+- `encoding extensions`: Rename encoding extensions for consistency with storage extensions (#24451)
+ - `jaegerencoding` -> `jaeger_encoding`
+ - `otlpencoding` -> `otlp_encoding`
+ - `textencoding` -> `text_encoding`
+ - `zipkinencoding` -> `zipkin_encoding`
+
+- `remoteobserverprocessor`: Rename remoteobserverprocessor to remotetapprocessor (#27873)
+- `collectdreceiver`: Stop using opencensus metrics, use the obsrecv format (#25148)
+
+### 🚩 Deprecations 🚩
+
+- `datadogexporter`: Deprecate config `traces::peer_service_aggregation` in favor of `traces::peer_tags_aggregation` (#29089)
+- `postgresqlreceiver`: Deprecate the PostgreSQL replication lag metric `postgresql.wal.lag` in favor of the more precise `postgresql.wal.delay` (#26714)
+
+### 🚀 New components 🚀
+
+- `extension/opampextension`: Add a new extension that implements an OpAMP agent for reporting the collector's health and effective configuration. (#16462)
+- `sumologicprocessor`: add Sumo Logic Processor (#23946)
+ move processor from https://github.com/SumoLogic/sumologic-otel-collector/ repository
+- `alertmanagerexporter`: Add new exporter for sending events as alerts to Alertmanager (#23569)
+- `remotetapextension`: Add a new extension, remotetapextension to use with the remoteobserverprocessor processors. (#19634)
+- `otlpencodingextension`: Introduce OTLP encoding extension (#6272)
+- `pkg/translator/azure`: Create a translator for Azure Resource Log format (#18210)
+
+### 💡 Enhancements 💡
+
+- `awsxrayexporter`: Convert individual HTTP error events into exceptions within subsegments for AWS SDK spans and strip AWS.SDK prefix from remote aws service name (#27232)
+- `azuremonitorexporter`: Added connection string support to the Azure Monitor Exporter (#28853)
+ This enhancement simplifies the configuration process and aligns the exporter with Azure Monitor's recommended practices.
+ The Connection String method allows the inclusion of various fields such as the InstrumentationKey and IngestionEndpoint
+ within a single string, facilitating an easier and more integrated setup.
+ While the traditional InstrumentationKey method remains supported for backward compatibility, it will be phased out.
+ Users are encouraged to adopt the Connection String approach to ensure future compatibility and to leverage the broader
+ configuration options it enables.
+
+- `opensearchexporter`: Add log exporting capability to the opensearchexporter. (#23611)
+- `pdatatest`: Allow to compare metrics resource attributes or metric attribute values by matching on a portion of the dimension value with a regular expression. (#27690)
+ Use `MatchResourceAttributeValue("node_id", "cloud-node")` to match two metrics with a resource attribute value that starts with "cloud-node".
+ Use `MatchMetricAttributeValue("hostname", "container-tomcat-", "gauge.one", "sum.one")` to match metrics with the `hostname` attribute starting with `container-tomcat-`.
+
+- `processor/tailsampling`: adds optional upper bound duration for sampling (#26115)
+- `clickhouseexporter`: Add persistent storage support to clickhouse exporter (#27653)
+- `azuremonitorexporter`: Added documentation to describe how to use with the AAD Auth Proxy and enable AAD based authentication. (#24451)
+- `azuremonitorexporter`: Extended Azure Monitor exporter to support persistent queue. Default is for QueueSettings.Enabled to be false. (#25859)
+- `collectdreceiver`: Add support of confighttp.HTTPServerSettings (#28811)
+- `collectdreceiver`: Promote collectdreceiver as beta component (#28658)
+- `receiver/hostmetricsreceiver`: Added support for the host's cpuinfo frequencies. (#27445)
+  On Linux the current frequency is populated using the values from /proc/cpuinfo. An OS-specific implementation will be needed for Windows and others.
+- `datadogexporter`: Add a new traces config `trace_buffer` that specifies the number of outgoing trace payloads to buffer before dropping. (#28577)
+ If you start seeing log messages like `Payload in channel full. Dropped 1 payload.` in the datadog exporter, consider setting a higher `trace_buffer` to avoid traces being dropped.
+- `datadogexporter`: Add a new config `traces::peer_tags_aggregation` that enables aggregation of peer related tags in Datadog exporter (#29089)
+- `receiver/hostmetrics/scrapers/process`: add configuration option to mute `error reading username for process` (#14311, #17187)
+- `syslogexporter`: Promote syslogexporter to alpha and add it to otelcontribcol (#21242, #21244, #21245)
+- `azureeventhubreceiver`: Allow the Consumer Group to be set in the Configuration. (#28633)
+- `spanmetricsconnector`: Add Events metric to span metrics connector that adds list of event attributes as dimensions (#27451)
+- `exceptionsconnector`: Add trace id and span id to generated logs from exceptions when using exceptionsconnector. (#24407)
+- `processor/k8sattribute`: support adding labels and annotations from node (#22620)
+- `windowseventlogreceiver`: Add parsing for Security and Execution event fields. (#27810)
+- `filelogreceiver`: Add the ability to order files by mtime, to only read the most recently modified files (#27812)
+- `wavefrontreceiver`: Wrap metrics receiver under carbon receiver instead of using export function (#27248)
+- `exporter/datadog`: Added the "exporter.datadogexporter.DisableAPMStats" feature gate to disable APM stats computation. (#28615)
+- `pkg/ottl`: Add IsBool function into OTTL (#27897)
+- `k8sclusterreceiver`: add k8s.node.condition metric (#27617)
+- `kafka`: Expose resolve_canonical_bootstrap_servers_only (#26022)
+- `mongodbatlasreceiver`: Enhanced collector logs to include more information about the MongoDB Atlas API calls being made during logs retrieval. (#28851)
+- `datadogexporter`: Add support for host.cpu attributes. (#29156)
+- `datadogexporter`: Add support for custom container tags via resource attributes prefixed by `datadog.container.tag.*`. (#29156)
+- `receiver/mongodbatlasreceiver`: emit resource attributes "`mongodb_atlas.region.name`" and "`mongodb_atlas.provider.name`" on metric scrape. (#28833)
+- `pkg/golden`: Move the internal/coreinternal/golden folder to pkg/golden (#28594)
+- `processor/resourcedetection`: Add `processor.resourcedetection.hostCPUModelAndFamilyAsString` feature gate to change the type of `host.cpu.family` and `host.cpu.model.id` attributes from `int` to `string`. (#29025)
+ This feature gate will graduate to beta in the next release.
+
+- `tailsamplingprocessor`: Optimize performance of tailsamplingprocessor (#27889)
+- `redisreceiver`: include server.address and server.port resource attributes (#22044)
+- `servicegraphprocessor, servicegraphconnector`: Add a config option to periodically flush metrics, instead of flushing on every push. (#27679)
+- `spanmetricsconnector`: Add exemplars to sum metric (#27451)
+- `exporter/syslog`: send syslog messages in batches (#21244)
+ This changes the behavior of the Syslog exporter to send each batch of Syslog messages in a single request (with messages separated by newlines), instead of sending each message in a separate request and closing the connection after each message.
+- `cmd/telemetrygen`: Use exporter per worker for better metrics throughput (#26709)
+- `cmd/telemetrygen`: Add support for --otlp-http for telemetrygen logs (#18867)
+- `exporter/awss3exporter`: This feature allows role assumption for S3 export. It is especially useful on Kubernetes clusters that use IAM roles for service accounts (#28674)
+
+### 🧰 Bug fixes 🧰
+
+- `lokiexporter`: The tenant attribute is now not automatically promoted to a label. (#21045)
+ To add tenant attributes (resource/record) to labels, use the label hints explicitly.
+- `azuretranslator`: Allow numeric fields to use a String or Integer representation in JSON. (#28650)
+- `extension/zipkinencodingextension`: Fix bug when err is nil if invalid protocol value is supplied. (#28686)
+- `filelogreceiver`: Fix issue where counting number of logs emitted could cause panic (#27469, #29107)
+- `lokireceiver`: Fix issue where counting number of logs emitted could cause panic (#27469, #29107)
+- `kafkareceiver`: Fix issue where counting number of logs emitted could cause panic (#27469, #29107)
+- `k8sobjectsreceiver`: Fix issue where counting number of logs emitted could cause panic (#27469, #29107)
+- `fluentforwardreceiver`: Fix issue where counting number of logs emitted could cause panic (#27469, #29107)
+- `otlpjsonfilereceiver`: Fix issue where counting number of logs emitted could cause panic (#27469, #29107)
+- `datadogconnector`: Mark datadogconnector as `MutatesData` to prevent data race (#29111)
+- `azureeventhubreceiver`: Updated documentation around Azure Metric to OTel mapping. (#28622)
+- `receiver/hostmetrics`: Fix panic on load_scraper_windows shutdown (#28678)
+- `apachesparkreceiver`: Replacing inaccurate units for the spark.job.stage.active and spark.job.stage.result metrics for the Apache Spark receiver. (#29104)
+- `splunkhecreceiver`: Do not encode JSON response objects as string. (#27604)
+- `processor/k8sattributes`: Set attributes from namespace/node labels or annotations even if node/namespaces attribute are not set. (#28837)
+- `datadogexporter`: Only extract DD container tags from resource attributes. Previously, container tags were also extracted from span attributes. (#29156)
+- `datadogexporter`: Only add container tags in dedicated container tag section. Previously, container tags were also added as span tags. Container tags will now only be accessible via the span container tab, and not as span tags. (#29156)
+- `pkg/stanza`: Fix data-corruption/race-condition issue in udp async (reuse of buffer); use a buffer pool instead. (#27613)
+- `datadogexporter`: Fixes potential log records loss on a transient network/connectivity error (#24550)
+  The Datadog exporter treats network/connectivity errors (the HTTP client doesn't receive a response) as permanent errors, which can lead to loss of log records. This change makes these errors retryable.
+- `servicegraphprocessor, servicegraphconnector`: Measure latency in seconds instead of milliseconds (#27488)
+ Measures latency in seconds instead of milliseconds, as the metric name indicates.
+ Previously, milliseconds was used.
+ This unit is still available via the feature gate `processor.servicegraph.legacyLatencyUnitMs`.
+ This is a breaking change.
+
+- `sshcheckreceiver`: Use key_file instead of keyfile for the key in config. Aligns project practice, code, and docs. (#27035)
+- `zipkinreceiver`: Return BadRequest in case of permanent errors (#4335)
+
+## v0.88.0
+
+### 🛑 Breaking changes 🛑
+
+- `k8sclusterreceiver`: Remove opencensus.resourcetype resource attribute (#26487)
+- `splunkhecexporter`: Remove `max_connections` configuration setting. (#27610)
+  Use `max_idle_conns` or `max_idle_conns_per_host` instead.
+
+- `signalfxexporter`: Remove `max_connections` configuration setting. (#27610)
+  Use `max_idle_conns` or `max_idle_conns_per_host` instead.
+
+
+### 🚩 Deprecations 🚩
+
+- `dockerstatsreceiver`: container.cpu.percent metric will be deprecated in v0.79.0 in favor of container.cpu.utilization (#21807)
+  The metric `container.cpu.percent` is now disabled by default and will be removed in v0.88.0.
+ As a replacement, the following metric is now enabled by default: `container.cpu.utilization`.
+ For details, see the [docs](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/dockerstatsreceiver#transition-to-cpu-utilization-metric-name-aligned-with-opentelemetry-specification).
+
+- `parquetexporter`: Remove the parquet exporter (#27284)
+
+### 🚀 New components 🚀
+
+- `encoding/jsonlogencodingextension`: Add a new extension to support JSON encoding (only logs) (#6272)
+- `honeycombmarkerexporter`: This component will export markers to be consumed by the Honeycomb Markers API to highlight user events (#26653)
+- `zipkinencodingextension`: Introduce zipkin encoding extension. (#6272)
+
+### 💡 Enhancements 💡
+
+- `datasetexporter`: Make export of resources and scopes more flexible (#27651, #27649)
+- `pkg/stanza`: Add option to run udp logs receiver (and stanza udp input operator) concurrently to reduce data-loss during high-scale scenarios (#27613)
+- `receiver/prometheus`: Warn instead of failing when users rename using metric_relabel_configs in the prometheus receiver (#5001)
+- `awscloudwatchlogsexporter/awsemfexporter`: Reduce noisy logs emitted by CloudWatch Logs Pusher. (#27774)
+ The Collector logger will now write successful CloudWatch API writes at the Debug level instead of Info level.
+- `k8sobjectsreceiver`: Move k8sobjectsreceiver from Alpha stability to Beta stability for logs. (#27635)
+- `datadogconnector`: Allow datadogconnector to be used as a traces-to-traces connector (#27846)
+- `doubleconverter`: Add a Double converter to pkg/ottl (#22056)
+- `syslogreceiver`: validate protocol name (#27581)
+- `elasticsearchexporter`: add missing scope info in span attributes (#27282)
+- `extension/storage/filestorage`: Add support for setting bbolt fsync option (#20266)
+- `filelogreceiver`: Add a new "top_n" option to specify the number of files to track when using ordering criteria (#23788)
+- `azuredataexplorerexporter`: Added exporter helper config support for Azure Data Explorer exporter (#24329)
+- `k8sclusterreceiver`: add optional k8s.pod.qos_class resource attribute (#27483)
+- `pkg/stanza`: Log warning, instead of error, when Windows Event Log publisher metadata is not available and cache the successfully retrieved ones. (#27658)
+- `pkg/ottl`: Add optional Converter parameters to replacement Editors (#27235)
+ Functions to modify matched text during replacement can now be passed as optional arguments to the following Editors:
+ - `replace_pattern`
+ - `replace_all_patterns`
+ - `replace_match`
+ - `replace_all_matches`
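+
+  As an illustrative sketch (the statement below is an assumption; see the OTTL function docs for exact semantics), a hash Converter such as `SHA256` can be passed as the optional final argument so matched text is transformed during replacement:
+
+  ```
+  processors:
+    transform:
+      log_statements:
+        - context: log
+          statements:
+            - replace_pattern(attributes["user.id"], "(secret-\\w+)", "$$1", SHA256)
+  ```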
+
+- `awscloudwatchlogsexporter`: Improve the performance of the awscloudwatchlogsexporter (#26692)
+  Improve performance by adding support for multiple consumers and removing locks and limiters that are no longer
+  necessary.
+
+- `pkg/pdatatest`: support ignore timestamps in span comparisons for pdatatest (#27688)
+- `prometheusremotewriteexporter`: addition of `max_batch_size_bytes` configurable parameter, to allow users to adjust it based on the capabilities of their specific remote storage (#21911)
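+  A hedged config sketch (the endpoint and value shown are illustrative; check the exporter README for the default):
+
+  ```
+  exporters:
+    prometheusremotewrite:
+      endpoint: "https://prometheus.example.com/api/v1/write"
+      max_batch_size_bytes: 3000000
+  ```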
+- `pkg/pdatatest`: support ignore span attribute value in span comparisons for ptracetest (#27689)
+- `pkg/pdatatest`: support ignore span ID in span comparisons for ptracetest (#27685)
+- `pkg/pdatatest`: support ignore trace ID in span comparisons for ptracetest (#27687)
+- `pkg/stanza`: When async is enabled for udp receiver, separate logic into readers (only read logs from udp port and push to channel), and processors (read logs from channel and process; decode, split, add attributes, and push downstream), allowing to change concurrency level for both readers and processors separately. (#27613)
+- `signalfxexporter`: Add an option to control the dimension client timeout (#27815)
+- `signalfxexporter`: Add the build version to the user agent of the SignalFx exporter (#16841)
+- `splunkentreceiver`: Users can now use auth settings and basicauth extension to connect to their Splunk enterprise deployments (#27026)
+
+### 🧰 Bug fixes 🧰
+
+- `datasetexporter`: Do not crash with an NPE when any of the attributes contains a null value. (#27648)
+- `syslog`: add integration tests and fix related bugs (#21245)
+- `processor/resourcedetection`: Don't parse the field `cpuInfo.Model` if it's blank. (#27678)
+- `k8sclusterreceiver`: Change clusterquota and resourcequota metrics to use {resource} unit (#10553)
+- `cmd/telemetrygen`: Fix `go install` for telemetrygen (#27855)
+- `pkg/ottl`: Fix bug where named parameters needed a space after the equal sign (`=`). (#28511)
+- `filelogreceiver`: Fix issue where batching of files could result in ignoring start_at setting. (#27773)
+- `prometheusremotewrite`: Fix remote write exporter not respecting retrySettings.enabled flag (#27592)
+- `redactionprocessor`: Fix mask when multiple patterns exist (#27646)
+
+## v0.87.0
+
+### 🛑 Breaking changes 🛑
+
+- `receiver/kubeletstats`: Fixes a bug where the "insecure_skip_verify" config was not being honored when "auth_type" is "serviceAccount" in kubelet client. (#26319)
+  Before the fix, the kubelet client was not verifying the kubelet's certificate. The default value of the config is false,
+  so with the fix the client will start verifying the TLS certificate unless the config is explicitly set to true.
+
+- `parquetexporter`: Deprecate the Parquet Exporter, it will be removed in the next release. (#27284)
+- `bug_fix`: Improve counting for the `count_traces_sampled` metric (#25882)
+- `extension/filestorage`: Replace path-unsafe characters in component names (#3148)
+
+### 🚩 Deprecations 🚩
+
+- `resourcedetectionprocessor`: Detect faas.instance in the gcp detector, and deprecate detecting faas.id in the gcp detector. (#26486)
+ faas.id has been removed from the semantic conventions.
+- `k8sclusterreceiver`: Deprecate opencensus.resourcetype resource attribute (#26487)
+ opencensus.resourcetype resource attribute is deprecated and disabled by default.
+
+### 🚀 New components 🚀
+
+- `encodingextension`: Add implementation of encodingextension (#6272)
+
+### 💡 Enhancements 💡
+
+- `processor/probabilisticsampler`: Allow non-bytes values to be used as the source for the sampling decision (#18222)
+- `receiver/azuremonitorreceiver`: Add support for authenticating using AD workload identity (#24451)
+- `kafkareceiver`: Allow users to attach kafka header metadata with the log/metric/trace record in the pipeline. Introduce a new config param, 'header_extraction' and some examples. (#24367)
+- `exporter/kafkaexporter`: Adding Zipkin encoding option for traces to kafkaexporter (#21102)
+- `kubeletstatsreceiver`: Support specifying context for `kubeConfig` `auth_type` (#26665)
+- `kubeletstatsreceiver`: Adds new `k8s.pod.cpu_limit_utilization`, `k8s.pod.cpu_request_utilization`, `k8s.container.cpu_limit_utilization`, and `k8s.container.cpu_request_utilization` metrics that represent the ratio of cpu used vs set limits and requests. (#27276)
+- `kubeletstatsreceiver`: Adds new `k8s.pod.memory_limit_utilization`, `k8s.pod.memory_request_utilization`, `k8s.container.memory_limit_utilization`, and `k8s.container.memory_request_utilization` metrics that represent the ratio of memory used vs set limits and requests. (#25894)
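+  For example (a sketch assuming the standard per-metric enablement syntax; these metrics are disabled by default):
+
+  ```
+  receivers:
+    kubeletstats:
+      metrics:
+        k8s.pod.cpu_limit_utilization:
+          enabled: true
+        k8s.pod.memory_limit_utilization:
+          enabled: true
+  ```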
+- `filestatsreceiver`: Move the filestats receiver to beta stability (#27252)
+- `haproxyreceiver`: Move the haproxyreceiver to beta stability (#27254)
+- `splunkentreceiver`: adding additional metrics to the splunkentreceiver (#12667)
+- `cmd/telemetrygen`: Add support for custom telemetry attributes (#26505)
+
+### 🧰 Bug fixes 🧰
+
+- `processor/spanmetrics`: Prune histograms when dimension cache is pruned. (#27080)
+ Dimension cache was always pruned but histograms were not being pruned. This caused metric series created
+ by processor/spanmetrics to grow unbounded.
+
+- `syslogexporter`: use proper defaults according to RFCs (#25114)
+- `syslogparser`: return correct structure from syslog parser (#27414)
+- `splunkhecreceiver`: Fix receiver behavior when used for metrics and logs at the same time; metrics are no longer dropped. (#27473)
+- `metricstransformprocessor`: Fixes a nil pointer dereference when copying an exponential histogram (#27409)
+- `telemetrygen`: better defaults for http exporter mode (#26999)
+ - the url path default is now correct for both traces and metrics
+ - when not provided, the endpoint is automatically set to target a local gRPC or HTTP endpoint depending on the communication mode selected
+
+- `k8sclusterreceiver`: change k8s.container.ready, k8s.pod.phase, k8s.pod.status_reason, k8s.namespace.phase units to empty (#10553)
+- `k8sclusterreceiver`: Change k8s.node.condition* metric units to empty (#10553)
+- `syslogreceiver`: Fix issue where long tokens would be truncated prematurely (#27294)
+- `mongodbreceiver`: Fix mongo version not being collected (#27441)
+
+## v0.86.0
+
+### 🛑 Breaking changes 🛑
+
+- `jaegerexporter, jaegerthrifthttpexporter`: Removing deprecated jaeger and jaegerthrifthttp exporters (#26546)
+  This follows the deprecation plan to remove the component; the original removal date was July 2023, which has now passed.
+- `receiver/nginx`: Bump 'nginx.connections_current' gate to stable (#27024)
+
+### 💡 Enhancements 💡
+
+- `processor/tailsampling`: Allow sub-second decision wait time (#26354)
+- `processor/resourcedetection`: Added support for host's cpuinfo attributes. (#26532)
+  On Linux and Darwin, all fields are populated. On Windows, only family, vendor.id, and model.name are populated.
+- `pkg/stanza`: Add 'omit_pattern' setting to `split.Config`. (#26381)
+  This can be used to omit the start or end pattern from a log entry.
+- `skywalkingreceiver`: Implement receiver for JVM metrics in Skywalking and adapt them to the OpenTelemetry protocol. (#20315)
+- `statsdreceiver`: Add TCP support to statsdreceiver (#23327)
+- `azuredataexplorerexporter`: Added an optional column in the exported trace data to store the status code and message as a dynamic field. (#26496)
+- `statsdreceiver`: Allow for empty tag sets (#27011)
+- `pkg/ottl`: Update contexts to set and get time.Time (#22010)
+- `pkg/ottl`: Add a Now() function to ottl that returns the current system time (#27038, #26507)
+- `filelogreceiver`: Log the globbing IO errors (#23768)
+- `exporter/loadbalancing`: Allow metrics routing (#25858)
+- `pkg/ottl`: Allow named arguments in function invocations (#20879)
+ Arguments can now be specified by a snake-cased version of their name in the function's
+ `Arguments` struct. Named arguments can be specified in any order, but must be specified
+ after arguments without a name.
+
+- `pkg/ottl`: Add new `TruncateTime` function to help with manipulation of timestamps (#26696)
+- `pkg/stanza`: Add 'overwrite_text' option to severity parser. (#26671)
+ Allows the user to overwrite the text of the severity parser with the official string representation of the severity level.
+
+- `prometheusreceiver`: add a new flag, enable_protobuf_negotiation, which enables protobuf negotiation when scraping prometheus clients (#27027)
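+  A sketch of enabling the flag (scrape configs elided for brevity):
+
+  ```
+  receivers:
+    prometheus:
+      enable_protobuf_negotiation: true
+      config:
+        scrape_configs: []
+  ```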
+- `redisreceiver`: Added `redis.cmd.latency` metric. (#6942)
+- `processor/resourcedetectionprocessor`: add k8snode detector to provide node metadata; currently the detector provides `k8s.node.uid` (#26538)
+- `routingconnector`: Change routingconnector stability to alpha (#26495)
+- `supported platforms`: Add `linux/s390x` architecture to cross build tests in CI (#25138)
+- `telemetrygen`: Move the telemetrygen tool to use gRPC logging at warn level, in line with otlpgrpc. (#26659)
+- `splunkentreceiver`: adding component logic to splunkenterprise receiver (#12667)
+- `splunkhecreceiver`: Update splunk hec receiver to extract time query parameter if it is provided (#27006)
+- `cmd/telemetrygen`: Add CLI option for selecting different metric types (#26667)
+- `cloudflarereceiver`: Make TLS config optional for cloudflarereceiver (#26562)
+- `receiver/awscontainerinsightsreceiver`: Remove the need to set an env var in the receiver to get CPU and memory info (#24777)
+- `awsxrayexporter`: Change `exporter.awsxray.skiptimestampvalidation` feature gate from Alpha to Beta (#26553)
+- `processor/k8sattributes`: allow metadata extractions to be set to empty list (#14452)
+
+### 🧰 Bug fixes 🧰
+
+- `processor/tailsampling`: Prevent the tail-sampling processor from accepting duplicate policy names (#27016)
+- `awsemfexporter`: AWS EMF Exporter will not drop any metrics that contain NaN values to avoid JSON marshal errors. (#26267)
+- `k8sclusterreceiver`: Change k8s.deployment.available and k8s.deployment.desired metric units to {pod} (#10553)
+- `awsxrayexporter`: Restore the AWS X-Ray metadata structure when exporting. (#23610)
+- `telemetrygen`: remove the need for JSON unmarshalling of trace status codes and drop support for mixed-case input (#25906)
+- `haproxyreceiver`: Remove unused resource attributes. (#24920)
+- `k8sclusterreceiver`: Change k8scluster receiver metric units to follow otel semantic conventions (#10553)
+- `pkg/stanza`: Fix bug where force_flush_period not applied (#26691)
+- `filelogreceiver`: Fix issue where truncated file could be read incorrectly. (#27037)
+- `receiver/hostmetricsreceiver`: Make sure the process scraper uses the gopsutil context, respecting the `root_path` configuration. (#24777)
+ This regression was introduced by #24777
+- `k8sclusterreceiver`: change k8s.container.restarts unit from 1 to {restart} (#10553)
+
+## v0.85.0
+
+### 🛑 Breaking changes 🛑
+
+- `k8sclusterreceiver`: Remove deprecated Kubernetes API resources (#23612, #26551)
+ Drop support of HorizontalPodAutoscaler v2beta2 version and CronJob v1beta1 version.
+ Note that metrics for those resources will not be emitted anymore on Kubernetes 1.22 and older.
+
+- `prometheusexporters`: Append prometheus type and unit suffixes by default in prometheus exporters. (#26488)
+ Suffixes can be disabled by setting add_metric_suffixes to false on the exporter.
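+
+  To restore the previous metric names (a sketch using the option named in this entry; the endpoint is illustrative):
+
+  ```
+  exporters:
+    prometheus:
+      endpoint: "0.0.0.0:8889"
+      add_metric_suffixes: false
+  ```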
+- `attributesprocessor, resourceprocessor`: Transition featuregate `coreinternal.attraction.hash.sha256` to stable (#4759)
+
+### 💡 Enhancements 💡
+
+- `postgresqlreceiver`: Added `postgresql.database.locks` metric. (#26317)
+- `receiver/statsdreceiver`: Add support for distribution type metrics in the statsdreceiver. (#24768)
+- `pkg/ottl`: Add converters to convert time to unix nanoseconds, unix microseconds, unix milliseconds or unix seconds (#24686)
+- `oauth2clientauthextension`: Enable dynamically reading ClientID and ClientSecret from files (#26117)
+ - Read the client ID and/or secret from a file by specifying the file path to the ClientIDFile (`client_id_file`) and ClientSecretFile (`client_secret_file`) fields respectively.
+ - The file is read every time the client issues a new token. This means that the corresponding value can change dynamically during the execution by modifying the file contents.
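+
+  A sketch (the file paths and token URL are illustrative):
+
+  ```
+  extensions:
+    oauth2client:
+      client_id_file: /etc/otel/client_id
+      client_secret_file: /etc/otel/client_secret
+      token_url: https://auth.example.com/oauth2/token
+  ```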
+
+- `receiver/hostmetrics`: Don't collect connections data from the host if system.network.connections metric is disabled to not waste CPU cycles. (#25815)
+- `jaegerreceiver,jaegerremotesamplingextension`: Add featuregates to replace Thrift-gen with Proto-gen types for sampling strategies (#18401)
+ Available featuregates are:
+ - extension.jaegerremotesampling.replaceThriftWithProto
+ - receiver.jaegerreceiver.replaceThriftWithProto
+
+- `influxdbexporter`: Add user-configurable LogRecord dimensions (otel attributes -> InfluxDB tags) (#26342)
+- `k8sclusterreceiver`: Add optional k8s.kubelet.version, k8s.kubeproxy.version node resource attributes (#24835)
+- `k8sclusterreceiver`: Add k8s.pod.status_reason option metric (#24034)
+- `k8sobjectsreceiver`: Adds logic to properly handle 410 response codes when watching. This improves the reliability of the receiver. (#26098)
+- `k8sobjectreceiver`: Adds option to exclude event types (MODIFIED, DELETED, etc) in watch mode. (#26042)
+- `datadogexporter`: Host metadata for remote hosts is now reported on first sight or on change (#25145)
+ Host metadata for remote hosts will only be sent for payloads with the datadog.host.use_as_metadata resource attribute.
+
+### 🧰 Bug fixes 🧰
+
+- `processor/routing`: When using attributes instead of resource attributes, the routing processor would crash the collector. This does not affect the connector version of this component. (#26462)
+- `awsemfexporter`: Fix possible panic in when configuration option `awsemf.output_destination:stdout` is set (#26250)
+- `snmpreceiver`: Fix how the receiver determines how many resource attributes (RAs) on a metric are scalar (#26363)
+  We now create the proper number of resources for configurations where a resource uses fewer than the available number of scalar resource attributes.
+- `processor/tailsampling`: Added saving instrumentation library information for tail-sampling (#13642)
+- `receiver/kubeletstats`: Fixes client to refresh service account token when authenticating with kubelet (#26120)
+- `datadogexporter`: Fixes crash when mapping OTLP Exponential Histograms with no buckets. These will now be dropped instead. (#26103)
+- `filelogreceiver`: Fix the behavior of the add operator to continue to support EXPR(env("MY_ENV_VAR")) expressions (#26373)
+- `snmpreceiver`: SNMP values of type Counter64 were seen as unsupported, because the returned data type uint64 was unhandled. (#23897, #26119)
+- `pkg/stanza`: Fix issue unsupported type 'syslog_parser' (#26452)
+
+## v0.84.0
+
+### 🛑 Breaking changes 🛑
+
+- `jaegerreceiver`: Deprecate remote_sampling config in the jaeger receiver (#24186)
+ The jaeger receiver will fail to start if remote_sampling config is specified in it. The `receiver.jaeger.DisableRemoteSampling` feature gate can be set to let the receiver start and treat remote_sampling config as no-op. In a future version this feature gate will be removed and the receiver will always fail when remote_sampling config is specified.
+
+- `googlecloudexporter`: remove retry_on_failure from the googlecloud exporter. The exporter itself handles retries, and retrying can cause issues. (#57233)
+- `vcenterreceiver`: Dimension performance metrics into the metric attribute `object` (#25147)
+  The following metrics have been affected and now include the new metric attribute to properly dimension the data: `vcenter.vm.network.throughput`, `vcenter.vm.network.usage`, `vcenter.vm.network.packet.count`, `vcenter.vm.disk.latency.avg`, `vcenter.vm.disk.latency.max`, `vcenter.host.network.usage`, `vcenter.host.network.throughput`, `vcenter.host.network.packet.count`, `vcenter.host.network.packet.errors`,
+  `vcenter.host.disk.latency.avg`, `vcenter.host.disk.latency.max`, and `vcenter.host.disk.throughput`. More information on how to migrate can be found at https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/vcenterreceiver#feature-gates
+
+### 🚩 Deprecations 🚩
+
+- `datadogprocessor`: Deprecation of Datadog processor in favor of Datadog connector (#19740)
+- `tanzuobservabilityexporter`: Deprecation of Tanzu Observability (Wavefront) Exporter in favor of native OTLP ingestion. (#24225)
+
+### 💡 Enhancements 💡
+
+- `redisreceiver`: Adding username parameter for connecting to redis (#24408)
+- `postgresqlreceiver`: Added `postgresql.temp_files` metric. (#26080)
+- `receiver/azuremonitor`: Added new attributes to the metrics, such as name, type, and resource_group. (#24774)
+- `clickhouseexporter`: Change writing of metrics data to batch (#24403)
+- `signalfxexporter`: Added a mechanism to drop histogram buckets (#25845)
+- `journaldreceiver`: add support for identifiers (#20295)
+- `journaldreceiver`: add support for dmesg (#20295)
+- `cassandraexporter`: Allow custom port for Cassandra connection (#24391)
+- `pkg/ottl`: Add converters to convert duration to nanoseconds, microseconds, milliseconds, seconds, minutes or hours (#24686)
+- `snmpreceiver`: Support scalar OID resource attributes (#23373)
+ Add column and scalar OID metrics to resources that have scalar OID attributes
+- `googlemanagedprometheus`: Add a `add_metric_suffixes` option to the googlemanagedprometheus exporter. When set to false, metric suffixes are not added. (#26071)
+- `haproxyreceiver`: Add support for HTTP connections (#24440)
+- `cmd/telemetrygen`: Add cli flag --status-code for trace generation (#24286)
+- `kubeletstatsreceiver`: Add a new `uptime` metric for nodes, pods, and containers to track how many seconds have passed since the object started (#25867)
+- `opensearchexporter`: implement [OpenSearch](https://opensearch.org/) exporter. (#23611)
+- `pkg/ottl`: Add new `ExtractPatterns` converter that extracts regex patterns from a string. (#25834, #25856)
+- `pkg/ottl`: Add support for Log, Metric and Trace Slices to `Len` converter (#25868)
+- `lokitranslator`: Added Attributes support to the InstrumentationScope (#24027)
+- `lokitranslator`: Public method `LogToLokiEntry` from `pkg/loki/translator` now returns normalized (`.` replaced by `_`) label names (#26093)
+- `postgresqlreceiver`: Added `postgresql.deadlocks` metric. (#25688)
+- `postgresqlreceiver`: Added `postgresql.sequential_scans` metric. (#26096)
+- `prometheusreceiver`: The otel_scope_name and otel_scope_version labels are used to populate scope name and version. otel_scope_info is used to populate scope attributes. (#25870)
+- `receiver/prometheus`: translate units from prometheus to UCUM (#23208)
+- `snmpreceiver`: Add support for SNMP values of type counter64 (#23897)
+- `snmpreceiver`: Timeout for SNMP requests can now be configured. (#25885)
+- `telemetrygen`: The telemetrygen now supports setting the log's body (#26031)
+- `awsxrayexporter`: add `exporter.awsxray.skiptimestampvalidation` Alpha feature gate to remove xray timestamp restriction on first 32 bits of trace id (#26041)
+
+### 🧰 Bug fixes 🧰
+
+- `receiver_creator`: Update expr and relocate breaking `type` function to `typeOf` (#26038)
+- `azuremonitor_logexporter`: The log exporter now supports non-string data for the log record body. (#23422)
+- `vcenterreceiver`: Added a vcenter.resource_pool.inventory_path resource attribute to resource pool metrics in order to properly dimension resource pools of the same name. (#25831)
+- `loadbalancingexporter`: fix k8s service resolver retaining invalid old endpoints (#24914)
+- `prometheusremotewriteexporter`: Retry on 5xx status codes using `cenkalti/backoff` client (#20304)
+- `cmd/telemetrygen`: fix the default value of the arg status-code (#25849)
+
+## v0.83.0
+
+### 🛑 Breaking changes 🛑
+
+- `receiver/k8scluster`: Unify predefined and custom node metrics. (#24776)
+ - Update metrics description and units to be consistent
+ - Remove predefined metrics definitions from metadata.yaml because they are controlled by `node_conditions_to_report`
+ and `allocatable_types_to_report` config options.
+
+- `prometheusexporter`: Remove invalid unit translations from the prometheus exporters (#24647)
+- `receiver/prometheusexec`: Removes the deprecated prometheus_exec receiver (#24740)
+
+### 🚀 New components 🚀
+
+- `datadogconnector`: This is a new component that computes Datadog APM Stats in the event that trace pipelines are sampled. (#19740)
+  This component replaces the Datadog processor.
+
+- `gitproviderreceiver`: Add the skeleton for the new gitproviderreceiver in development with accompanying github scraper. (#22028)
+
+### 💡 Enhancements 💡
+
+- `azuredataexplorerexporter`: Add support for managed identity. This enables users to avoid key-based authentication (#21924)
+- `awsemfexporter`: Add awsemf.nodimrollupdefault feature gate to aws emf exporter (#23997)
+ Enabling the awsemf.nodimrollupdefault will cause the AWS EMF Exporter to use the NoDimensionRollup configuration
+ setting by default instead of ZeroAndSingleDimensionRollup.
+
+- `awss3exporter`: add Sumo Logic Installed Collector marshaler (#23212)
+- `receiver/collectdreceiver`: Migrate from OpenCensus to pdata; update collectd tests to match the pdata format. (#20760)
+- `datasetexporter`: Make duration of shutdown procedure configurable to minimise data losses. (#24415)
+- `datasetexporter`: Make sure serverHost field is correctly and always populated on the DataSet events. For more information and available configuration options, please refer to the plugin readme file. (#20660, #24415)
+- `datadogreceiver`: add datadog trace and span id (#23057)
+- `pkg/ottl`: Add support for using addition and subtraction with time and duration (#22009)
+- `transformprocessor`: Add extract_count_metric OTTL function to transform processor (#22853)
+- `transformprocessor`: Add extract_sum_metric OTTL function to transform processor (#22853)
+- `prometheusreceiver`: Don't drop histograms without buckets (#22070)
+- `pkg/ottl`: Add a new Function Getter to the OTTL package, to allow passing Converters as literal parameters. (#22961)
+ Currently OTTL provides no way to use any defined Converter within another Editor/Converter.
+ Although Converters can be passed as a parameter, they are always executed and the result is what is actually passed as the parameter.
+ This allows OTTL to pass Converters themselves as a parameter so they can be executed within the function.
+
+- `resourcedetectionprocessor`: GCP resource detection processor can automatically add `gcp.gce.instance.hostname` and `gcp.gce.instance.name` attributes. (#24598)
+- `splunkhecexporter`: Add heartbeat check while startup and new config param, heartbeat/startup (defaults to false). This is different than the healtcheck_startup, as the latter doesn't take token or index into account. (#24411)
+- `hostmetricsreceiver`: Report logical and physical number of CPUs as metric. (#22099)
+ Use the `system.cpu.logical.count::enabled` and `system.cpu.physical.count::enabled` flags to enable them
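+
+  For example (a sketch assuming the standard scraper metrics syntax):
+
+  ```
+  receivers:
+    hostmetrics:
+      scrapers:
+        cpu:
+          metrics:
+            system.cpu.logical.count:
+              enabled: true
+            system.cpu.physical.count:
+              enabled: true
+  ```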
+
+- `k8sclusterreceiver`: Allows disabling metrics and resource attributes (#24568)
+- `k8sclusterreceiver`: Reduce memory utilization (#24769)
+- `k8sattributes`: Added k8s.cluster.uid to k8sattributes processor to add cluster uid (#21974)
+- `resourcedetectionprocessor`: Collect heroku metadata available instead of exiting early. Log at debug level if metadata is missing to help troubleshooting. (#25059)
+- `hostmetricsreceiver`: Improved description of the system.cpu.utilization metrics. (#25115)
+- `cmd/mdatagen`: Avoid reusing the same ResourceBuilder instance for multiple ResourceMetrics (#24762)
+- `resourcedetectionprocessor`: Add detection of os.description to system detector (#24541)
+- `filelogreceiver`: Bump 'filelog.allowHeaderMetadataParsing' feature gate to beta (#18198)
+- `receiver/purefareceiver`: implement the custom label `fa_array_name` to act as a pretty label for metrics received. (#23889, #21248, #22027)
+- `receiver/prometheusreceiver`: Add config `report-extra-scrape-metrics` to report additional prometheus scraping metrics (#21040)
+ Emits additional metrics - scrape_body_size_bytes, scrape_sample_limit, scrape_timeout_seconds. scrape_body_size_bytes metric can be used for checking failed scrapes due to body-size-limit.
+- `receiver/sqlquery`: Set ObservedTimestamp on collected logs (#23776)
+- `exporter/awss3exporter`: Allow custom endpoints to be configured for exporting spans (#21833)
+- `cmd/telemetrygen`: Add ability to set custom path to endpoint. (#24551)
+- `telemetrygen`: Adds batch option to configure whether to batch traces, and size option to configure minimum size in MB of each trace for load testing. (#9597)
+- `webhookreceiver`: Add an optional config setting to set a required header that all incoming requests must provide (#24270)
+- `extension/jaegerremotesampling`: gRPC remote source usage in jaegerremotesampling extension propagates HTTP headers if set in gRPC client config (#24414)
+- `extension/jaegerremotesampling`: gRPC remote source usage in jaegerremotesampling extension supports optional caching via existing `reload_interval` config (#24840)
+
+### 🧰 Bug fixes 🧰
+
+- `receiver/sshcheck`: Add the SSH endpoint as a resource attribute (#24441)
+- `awsemfexporter`: Enforce time to live on metric data that is stored for the purpose of cumulative to delta conversions within EMF Exporter (#25058)
+  This change fixes a bug where the cache used to store metric information for cumulative to delta
+  conversions was not enforcing its time to live. This could cause excessive memory growth in certain scenarios which could
+  lead to OOM failures for the Collector. To properly fix this issue, package-global metric caches were removed and replaced
+  with caches that are unique per EMF exporter. A byproduct of this change is that no two EMF exporters within a
+  Collector will share a cache, leading to more accurate cumulative to delta conversions.
+
+- `awsemfexporter`: Add retain_initial_value_of_delta_metric to translateOTelToGroupedMetric, allowing the initial set of metrics to be published (#24051)
+- `carbonreceiver`: Fix Carbon receiver obsrecv operations memory leak (#24275)
+  The carbonreceiver has a memory leak where it will repeatedly open new obsrecv operations but not close them afterwards. Those unclosed operations accumulate and eventually become a burden.
+
+ The fix is to make sure the receiver only creates an operation per interaction over TCP.
+
+- `datadogexporter`: Populate OTLP resource attributes in Datadog logs. Changes mapping for `jvm.loaded_classes` from `process.runtime.jvm.classes.loaded` to `process.runtime.jvm.classes.current_loaded`. (#24674)
+- `datadogexporter`: Fix the population of Datadog `system.*` metrics. Ensure the average is within [min, max] in histograms. (#25071)
+ The minimum and maximum estimation is only used when the minimum and maximum are not available in the OTLP payload or this is a cumulative payload.
+- `pkg/stanza`: Create a new decoder for each TCP/UDP connection to prevent concurrent write to buffer. (#24980)
+- `exporter/kafkaexporter`: Fixes a panic when SASL configuration is not present (#24797)
+- `lokitranslator, lokiexporter`: Fixes a panic that occurred during the promotion of nested attributes containing dots to labels (#25125)
+- `awsxrayexporter`: Fix X-Ray Segment status code and exception translations. (#24381)
+- `receiver/haproxy`: Make sure emitted resource metrics have distinct resources by default (#24921)
+ This is done by enabling and renaming the following resource attributes:
+ - proxy_name -> haproxy.proxy_name
+ - service_name -> haproxy.service_name
+
+- `receiver/k8sobjects`: Fix bug where duplicate data would be ingested for watch mode if the client connection got reset. (#24806)
+- `datadogreceiver`: Fixed NPE on failed to decode message path (#24562)
+- `zipkinreceiver`: Respect Zipkin's serialized status tags when converting them to span status (#14965)
+- `datadogexporter`: Correctly set metrics exporter capabilities to state that it mutates data (#24908)
+ This could lead to random panics if using more than one metrics exporter.
+
+- `processor/resourcedetection`: Do not drop all system attributes if `host.id` cannot be fetched. (#24669)
+- `signalfxexporter`: convert vmpage_io* translated metrics to pages (#25064)
+- `splunkhecreceiver`: align the success response body with Splunk Enterprise (#19219)
+  Changes the response from plaintext "ok" to JSON {"text":"success", "code":0}
+
+## v0.82.0
+
+### 🛑 Breaking changes 🛑
+
+- `receiver/awsfirehose`: Change the type of `Config.AccessKey` to be `configopaque.String` (#17273)
+- `receiver/cloudfoundry`: Change the type of `Config.UAA.Password` to be `configopaque.String` (#17273)
+- `exporter/datasetexporter`: Remove temporary client side attribute aggregation and corresponding traces.max_wait and traces.aggregate config options which are now redundant. (#20660)
+ This pull request removes the following attributes from the DataSet trace events: services,
+ span_count, error_count. Those attributes were populated on the client side as part of the client
+ side aggregation code. This client side aggregation was meant as a temporary solution until a
+ proper solution is implement on the server side. Being a temporary solution meant it had many
+ edge cases and would only work under specific and limited circumstances (all spans which belong
+ to a trace are part of the same batch received by the plugin).
+
+  Corresponding config options (traces.aggregate and traces.max_wait) which are now redundant and
+ unused have also been removed.
+
+- `mysqlreceiver`: removing `mysql.locked_connects` metric which is replaced by `mysql.connection.errors` (#23211)
+- `pkg/ottl`: Allow access to the metrics slice in the metric context (#24446)
+ This is only a breaking change for users using OTTL in custom components. For all Contrib components this is an enhancement.
+- `pkg/stanza`: Make fileconsumer.PositionalScanner internal (#23999)
+- `pkg/stanza`: Make fileconsumer.Fingerprint internal (#23998)
+- `receiver/httpcheck`: Fail fast on endpoint missing scheme (#23020)
+ Previously, when configured with an endpoint without HTTP/HTTPS scheme like "opentelemetry.io",
+ the receiver would start correctly, but fail to check the endpoint, producing the `httpcheck.error`
+ metric on every collection interval. After this change, the receiver fails to start, writing
+ an error log saying that you need to provide a scheme in the endpoint.
+
+- `receiver/jmx`: Change the types of `Config.{Password,KeystorePassword,TruststorePassword}` to be `configopaque.String` (#17273)
+- `httpcheckreceiver`: support scraping multiple targets (#18823)
+- `resourcedetectionprocessor`: Disable `host.id` by default on the `system` detector. This restores the behavior prior to v0.72.0 when using the `system` detector together with other detectors that set `host.id` (#21233)
+ To re-enable `host.id` on the `system` detector set `system::resource_attributes::host.id::enabled` to `true`:
+
+ ```
+ resourcedetection:
+ detectors: [system]
+ system:
+ resource_attributes:
+ host.id:
+ enabled: true
+ ```
+
+- `receiver/nsxt`: Change the type of `Config.Password` to be `configopaque.String` (#17273)
+- `receiver/podman`: Change the type of `Config.SSHPassphrase` to be `configopaque.String` (#17273)
+- `receiver/postgresql`: Change the type of `Config.Password` to be `configopaque.String` (#17273)
+- `prometheusreceiver`: Remove unused buffer_period and buffer_count configuration options (#24258)
+- `prometheusreceiver`: Add the `trim_metric_suffixes` configuration option to enable metric suffix trimming. (#21743, #8950)
+ When enabled, suffixes for unit and type are trimmed from metric names.
+ If you previously enabled the `pkg.translator.prometheus.NormalizeName`
+ feature gate, you will need to enable this option to have suffixes trimmed.
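+
+ A hedged sketch of enabling the option (the scrape configuration is elided, and option placement is an assumption):
+
+ ```
+ receivers:
+   prometheus:
+     trim_metric_suffixes: true
+     config:
+       scrape_configs: []
+ ```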
+
+- `receiver/pulsar`: Change the types of `Config.Authentication.Token.Token` and `Config.Authentication.Athenz.PrivateKey` to be `configopaque.String` (#17273)
+- `receiver/rabbitmq`: Change the type of `Config.Password` to be `configopaque.String` (#17273)
+- `receiver/redis`: Change the type of `Config.Password` to be `configopaque.String` (#17273)
+- `receiver/riak`: Change the type of `Config.Password` to be `configopaque.String` (#17273)
+- `receiver/saphana`: Change the type of `Config.Password` to be `configopaque.String` (#17273)
+- `receiver/snmp`: Change the types of the `Config.{AuthPassword,PrivacyPassword}` fields to be of `configopaque.String` (#17273)
+- `receiver/snowflake`: Change the type of `Config.Password` to be `configopaque.String` (#17273)
+- `receiver/solace`: Change the type of `Config.Auth.PlainText.Password` to be `configopaque.String` (#17273)
+- `spanmetricsconnector`: Histograms will not have exemplars by default (#23872)
+ Previously spanmetricsconnector would attach every single span as an exemplar to the histogram.
+ Exemplars are now disabled by default. To enable them, set `exemplars::enabled=true`.
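+
+ For example, to re-enable exemplars on the connector (a sketch; other settings elided):
+
+ ```
+ connectors:
+   spanmetrics:
+     exemplars:
+       enabled: true
+ ```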
+
+- `receiver/vcenter`: Change the type of `Config.Password` to be `configopaque.String` (#17273)
+
+### 🚩 Deprecations 🚩
+
+- `dynatraceexporter`: Add deprecation note to Dynatrace metrics exporter (#23992)
+- `pkg/stanza`: Deprecate fileconsumer.EmitFunc in favor of fileconsumer.emit.Callback (#24036)
+- `pkg/stanza`: Deprecate fileconsumer.Finder and related sorting structs and functions (#24013)
+- `servicegraphprocessor`: Service Graph Processor is deprecated in favor of the Service Graph Connector (#19737)
+
+### 🚀 New components 🚀
+
+- `exceptionsconnector`: A connector that generates metrics and logs from application exceptions recorded in spans (#17272)
+- `opensearchexporter`: exports OpenTelemetry signals to [OpenSearch](https://opensearch.org/). (#23611)
+- `routingconnector`: A connector version of the routingprocessor (#19738)
+
+### 💡 Enhancements 💡
+
+- `lokiexporter, lokitranslator`: Added setting `default_labels_enabled`. This setting makes the default labels `exporter`, `job`, `instance`, and `level` optional (#22156)
+- `windowseventlogreceiver`: Add `exclude_providers` to the config, a list of one or more event log providers to exclude from processing. (#21491)
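+ A minimal sketch (the channel and provider names are illustrative):
+
+ ```
+ receivers:
+   windowseventlog:
+     channel: application
+     exclude_providers:
+       - SomeNoisyProvider
+ ```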
+- `loadbalancingexporter`: Added docs for k8s service resolver. (#24287)
+- `loadbalancingexporter`: Added kubernetes service resolver to loadbalancingexporter. (#22776)
+- `opamp supervisor`: Add Capabilities support to Supervisor config. (#21044)
+- `opamp supervisor`: OpAMP Supervisor config file now supports "tls" settings in the "server" section. (#23848)
+- `pkg/ottl`: Add new `Len` converter that computes the length of strings, slices, and maps. (#23847)
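+ As an illustration, the converter can be used from a transform processor statement (a sketch; the statement context and attribute name are assumptions):
+
+ ```
+ processors:
+   transform:
+     log_statements:
+       - context: log
+         statements:
+           - set(attributes["body.length"], Len(body))
+ ```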
+- `pkg/stanza`: Add an option to skip log tokenization for both TCP and UDP receivers. Use the `one_log_per_packet` setting to skip log tokenization when multiline is not used. (#23440)
+- `redisreceiver`: Add units to metrics where they were missing. (#23573)
+ Affected metrics can be found below.
+ - redis.role
+ - redis.cmd.calls
+ - redis.clients.connected
+ - redis.clients.blocked
+ - redis.keys.expired
+ - redis.keys.evicted
+ - redis.connections.received
+ - redis.connections.rejected
+ - redis.memory.fragmentation_ratio
+ - redis.rdb.changes_since_last_save
+ - redis.commands.processed
+ - redis.keyspace.hits
+ - redis.keyspace.misses
+ - redis.slaves.connected
+ - redis.db.keys
+ - redis.db.expires
+
+- `elasticsearchexporter`: Add span duration in span store. (#14538)
+- `exporter/datasetexporter`: Rename 'observed_timestamp' field on the DataSet event to 'sca:observedTimestamp' and ensure the value is nanoseconds since epoch, update serializing and handling of body / message field to ensure it's consistent with other DataSet integrations and allow user to disable exporting scope information with each event by setting 'export_scope_info_on_event' logs config option to false. (#20660, #23826)
+- `exporter/datasetexporter`: Correctly map LogRecord severity to DataSet severity, remove redundant DataSet event message field prefix (OtelExporter - Log -) and remove redundant DataSet event fields (flags, flags.is_sampled). (#20660, #23672)
+- `journaldreceiver`: Fail if there are insufficient permissions for the journalctl command (#20906)
+- `pkg/ottl`: Adds support for using boolean expressions with durations (#22713)
+- `pkg/ottl`: Adds support for using boolean expressions with time objects (#22008)
+- `pkg/ottl`: Add new `Duration` converter to convert a string to a Go `time.Duration` (#22015)
+- `kafkareceiver`: Added support for json-encoded logs for the kafkareceiver (#20734)
+- `resourcedetectionprocessor`: Support GCP Cloud Run Jobs in resource detection processor. (#23681)
+- `googlemanagedprometheusexporter`: GMP exporter now automatically adds target_info and otel_scope_info metrics. (#24372)
+- `googlemanagedprometheusexporter`: GMP exporter supports filtering resource attributes to metric labels. (#21654)
+- `hostmetricsreceiver`: Remove the need to set environment variables with hostmetricsreceiver (#23861)
+- `experimentalmetricmetadata`: Introduce experimental entity event data types (#23565)
+- `influxdbexporter`: limit size of write payload (#24001)
+- `k8sclusterreceiver`: Change k8s.clusterresourcequota metrics to use mdatagen (#4367)
+- `k8sclusterreceiver`: Change k8s.cronjob.active_jobs to use mdatagen (#10553)
+- `k8sclusterreceiver`: Change k8s.daemonset metrics to use mdatagen (#10553)
+- `k8sclusterreceiver`: Refactor k8s.job metrics to use mdatagen (#10553)
+- `k8sclusterreceiver`: Change k8s.replicaset metrics to use mdatagen (#10553)
+- `k8sclusterreceiver`: Update k8s.replicationcontroller metrics to use mdatagen (#10553)
+- `k8sattrprocessor`: Add k8sattr.rfc3339 feature gate to allow RFC3339 format for k8s.pod.start_time timestamp value. (#24016)
+ Timestamp value before and after.
+ `2023-07-10 12:34:39.740638 -0700 PDT m=+0.020184946`, `2023-07-10T12:39:53.112485-07:00`
+
+- `k8sclusterreceiver`: Begin emitting entity events as logs (#24400)
+- `k8sclusterreceiver`: Report entity state periodically (#24413)
+- `exporter/kafka`: Added support to Kafka exporter for configuring SASL handshake version (#21074)
+- `tailsamplingprocessor`: Added invert_match rule for numeric matcher, to support exclusion decision (#24563)
+- `cmd/mdatagen`: Simplify resource building in MetricsBuilder, suggest using ResourceBuilder instead. (#24443)
+- `cmd/mdatagen`: Introduce resource builder helper. (#24360)
+- `datadogexporter`: Add support for the `metrics::sums::initial_cumulative_monotonic_value` setting (#24544)
+ The setting has the same values and semantics as the `initial_value` setting from the `cumulativetodelta` processor
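+
+ A hedged configuration sketch (the `auto` value mirrors the `cumulativetodelta` processor's `initial_value` options and is shown as an assumption):
+
+ ```
+ exporters:
+   datadog:
+     metrics:
+       sums:
+         initial_cumulative_monotonic_value: auto
+ ```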
+
+- `datadogexporter`: Source resolution logic now runs all source providers in parallel to improve start times. (#24234)
+ All source providers now run in all environments so you may see more spurious logs from downstream dependencies when using the Datadog exporter. These logs should be safe to ignore.
+
+- `datadogexporter`: Add support for reporting host metadata from remote hosts. (#24290)
+ Resource attributes for each telemetry signal will be used for host metadata if the 'datadog.host.use_as_metadata' boolean attribute is set to 'true'.
+
+- `resourcedetectionprocessor`: The system detector now detects the `host.arch` semantic convention (#22939)
+ The GOARCH value is used on architectures that are not well-known
+- `pkg/ottl`: Improve error reporting for errors during statement parsing (#23840)
+ - Failures are now printed for all statements within a context, and the statements are printed next to the errors.
+ - Erroneous values found during parsing are now quoted in error logs.
+
+- `prometheusremotewrite`: improve the latency and memory utilisation of the conversion from OpenTelemetry to Prometheus remote write (#24288)
+- `prometheusremotewriteexporter, prometheusexporter`: Add `add_metric_suffixes` configuration option, which can disable the addition of type and unit suffixes. (#21743, #8950)
+- `pkg/translator/prometheusremotewrite`: Downscale exponential histograms to fit prometheus native histograms if necessary. (#17565)
+- `routingprocessor`: Enables processor to extract metadata from client.Info (#20913)
+ Enables the processor to perform context-based routing for payloads on the HTTP server of the OTLP receiver
+- `processor/transform`: Report all errors from parsing OTTL statements (#24245)
+- `azuremonitorexporter`: Map enduser.id to Azure UserID tag (#18103)
+
+### 🧰 Bug fixes 🧰
+
+- `datasetexporter`: Call the correct library function when the exporter is shutting down. (#24253)
+- `awscloudwatchreceiver`: emit logs from one log stream in the same resource (#22145)
+- `resourcedetectionprocessor`: avoid returning empty host.id on the `system` detector (#24230)
+- `ecsobserver`: Don't fail with error when finding a task of EC2 launch type and missing container instance, just ignore them. This fixes behavior when task is provisioning and its containers are not assigned to instances yet. (#23279)
+- `filelogreceiver`: Fix file sort timestamp validation (#24041)
+- `lokitranslator`: Fix bug where attributes targeted in a slice hint were not converted to labels when the log record has severity_number (#22038)
+- `prometheusreceiver`: Don't fail the whole scrape on invalid data (#24030)
+- `datadogreceiver`: Include datadog span.Resource in translated span attributes (#23150)
+- `cmd/telemetrygen`: Move the span attribute span.kind to the native `Kind`, which is top-level trace information (#24286)
+- `pkg/stanza`: Fix issue where nil body would be converted to string (#24017)
+- `pkg/stanza`: Fix issue where syslog input ignored enable_octet_counting setting (#24073)
+- `processor/resourcedetection`: Fix docker detector not setting any attributes. (#24280)
+- `processor/resourcedetection`: Fix Heroku config option for the `service.name` and `service.version` attributes (#24355)
+ `service.name` and `service.version` attributes were mistakenly controlled by `heroku.app.name` and
+ `heroku.release.version` options under `resource_attributes` configuration introduced in 0.81.0.
+ This PR fixes the issue by using the correct config options named the same as the attributes.
+- `processor/resourcedetection`: Make sure to use an AKS config struct instead of nil to avoid a collector panic (#24549)
+- `filelogreceiver`: Fix issue where files were deduplicated unnecessarily (#24235)
+- `processor/tailsamplingprocessor`: Fix data race when accessing spans during policies evaluation (#24283)
+- `zipkintranslator`: Stop dropping error tags from Zipkin spans. The old code removed all error tags from those spans, rendering them useless if an actual error happened. In addition, error tags are no longer deleted if they contain useful information. (#16530)
+
## v0.81.0
### 🛑 Breaking changes 🛑
@@ -3191,7 +4391,7 @@ https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/9278.
- [`cumulativetodelta` processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/cumulativetodeltaprocessor) to convert cumulative sum metrics to cumulative delta
- [`file` exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/fileexporter) from core repository ([#3474](https://github.com/open-telemetry/opentelemetry-collector/issues/3474))
-- [`jaeger` exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/jaegerexporter) from core repository ([#3474](https://github.com/open-telemetry/opentelemetry-collector/issues/3474))
+- [`jaeger` exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/exporter/jaegerexporter) from core repository ([#3474](https://github.com/open-telemetry/opentelemetry-collector/issues/3474))
- [`kafka` exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/kafkaexporter) from core repository ([#3474](https://github.com/open-telemetry/opentelemetry-collector/issues/3474))
- [`opencensus` exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/opencensusexporter) from core repository ([#3474](https://github.com/open-telemetry/opentelemetry-collector/issues/3474))
- [`prometheus` exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusexporter) from core repository ([#3474](https://github.com/open-telemetry/opentelemetry-collector/issues/3474))
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index c0259ce9de9a..7a7efcb7d413 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -13,13 +13,22 @@ change. For instance:
## Changelog
+### Overview
+
+There are two Changelogs for this repository:
+
+- `CHANGELOG.md` is intended for users of the collector and lists changes that affect the behavior of the collector.
+- `CHANGELOG-API.md` is intended for developers who are importing packages from the collector codebase.
+
+### When to add a Changelog Entry
+
Pull requests that contain user-facing changes will require a changelog entry. Keep in mind the following types of users:
1. Those who are consuming the telemetry exported from the collector
2. Those who are deploying or otherwise managing the collector or its configuration
3. Those who are depending on APIs exported from collector packages
4. Those who are contributing to the repository
-The changelog is primarily directed at the first three groups but it is sometimes appropriate to include important updates relevant only to the forth group.
+Changes that affect the first two groups should be noted in `CHANGELOG.md`. Changes that affect the third or fourth groups should be noted in `CHANGELOG-API.md`.
If a changelog entry is not required, a maintainer or approver will add the `Skip Changelog` label to the pull request.
@@ -45,11 +54,11 @@ No changelog entry:
### Adding a Changelog Entry
-The [CHANGELOG.md](./CHANGELOG.md) file in this repo is autogenerated from `.yaml` files in the `./.chloggen` directory.
+The [CHANGELOG.md](./CHANGELOG.md) and [CHANGELOG-API.md](./CHANGELOG-API.md) files in this repo are autogenerated from `.yaml` files in the `./.chloggen` directory.
Your pull-request should add a new `.yaml` file to this directory. The name of your file must be unique since the last release.
-During the collector release process, all `./.chloggen/*.yaml` files are transcribed into `CHANGELOG.md` and then deleted.
+During the collector release process, all `./.chloggen/*.yaml` files are transcribed into `CHANGELOG.md` and `CHANGELOG-API.md` and then deleted.
**Recommended Steps**
1. Create an entry file using `make chlog-new`. This generates a file based on your current branch (e.g. `./.chloggen/my-branch.yaml`)
@@ -92,7 +101,7 @@ With above guidelines, you can write code that is more portable and easier to ma
## Adding New Components
**Before** any code is written, [open an
-issue](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/new?assignees=&labels=new+component&template=new_component.md&title=New%20component)
+issue](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/new?assignees=&labels=Sponsor+Needed%2Cneeds+triage&projects=&template=new_component.yaml&title=New+component%3A+)
providing the following information:
* Who's the sponsor for your component. A sponsor is an approver who will be in charge of being the official reviewer of
@@ -102,8 +111,8 @@ providing the following information:
components, having a sponsor means that your use case has been validated.
* Some information about your component, such as the reasoning behind it, use-cases, telemetry data types supported, and
anything else you think is relevant for us to make a decision about accepting the component.
-* The configuration options your component will accept. This will help us understand what it does and have an idea of
- how the implementation might look like.
+* The configuration options your component will accept. This will give us a better understanding of what it does, and
+ how it may be implemented.
Components refer to connectors, exporters, extensions, processors, and receivers. The key criteria to implementing a component is to:
@@ -112,7 +121,7 @@ Components refer to connectors, exporters, extensions, processors, and receivers
* Provide the implementation which performs the component operation
* Have a `metadata.yaml` file and its generated code (using [mdatadgen](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/cmd/mdatagen/README.md)).
-Familiarize yourself with the interface of the component that you want to write, and use existing implementations as reference.
+Familiarize yourself with the interface of the component that you want to write, and use existing implementations as a reference.
[Building a Trace Receiver](https://opentelemetry.io/docs/collector/trace-receiver/) tutorial provides a detailed example of building a component.
*NOTICE:* The Collector is in Beta stage and as such the interfaces may undergo breaking changes. Component creators
@@ -121,8 +130,8 @@ excluded from the default builds.
Generally, maintenance of components is the responsibility of contributors who authored them. If the original author or
some other contributor does not maintain the component it may be excluded from the default build. The component **will**
-be excluded if it causes build problems, has failing tests or otherwise causes problems to the rest of the repository
-and the rest of contributors.
+be excluded if it causes build problems, has failing tests, or otherwise causes problems to the rest of the repository
+and its contributors.
- Create your component under the proper folder and use Go standard package naming recommendations.
- Use a boiler-plate Makefile that just references the one at top level, ie.: `include ../../Makefile.Common` - this
@@ -143,10 +152,36 @@ and the rest of contributors.
and in the respective testing harnesses. To align with the test goal of the project, components must be testable within the framework defined within
the folder. If a component can not be properly tested within the existing framework, it must increase the non testable
components number with a comment within the PR explaining as to why it can not be tested.
-- Add the sponsor for your component and yourself to a new line for your component in the
- [`.github/CODEOWNERS`](./.github/CODEOWNERS) file.
+- Create a `metadata.yaml` file with at minimum the required fields defined in [metadata-schema.yaml](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/cmd/mdatagen/metadata-schema.yaml).
+Here is a minimal representation:
+```
+type:
+
+status:
+ class:
+ stability:
+ development: []
+ codeowners:
+ active: [, ]
+```
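+For illustration, a hypothetical `fooreceiver` might fill the template in as follows (all values are examples, not requirements):
+```
+type: foo
+
+status:
+  class: receiver
+  stability:
+    development: [metrics]
+  codeowners:
+    active: [yourgithubhandle]
+```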
- Run `make generate-gh-issue-templates` to add your component to the dropdown list in the issue templates.
-- Create a `metadata.yaml` file with at minimum the required fields defined in [metadata-schema.yaml](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/cmd/mdatagen/metadata-schema.yaml) and use the [metadata generator](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/cmd/mdatagen/README.md#using-the-metadata-generator) to generate the associated code/documentation.
+- For README.md, you can start with the following:
+```
+#
+
+
+```
+- Create a `doc.go` file with a generate pragma. For a `fooreceiver`, the file will look like:
+```
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+//go:generate mdatagen metadata.yaml
+
+// Package fooreceiver bars.
+package fooreceiver // import "github.com/open-telemetry/opentelemetry-collector-contrib/receiver/fooreceiver"
+```
+- Type `make update-codeowners`. This will trigger the regeneration of the `.github/CODEOWNERS` file and the [metadata generator](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/cmd/mdatagen/README.md#using-the-metadata-generator) to generate the associated code/documentation.
When submitting a component to the community, consider breaking it down into separate PRs as follows:
@@ -156,6 +191,18 @@ When submitting a component to the community, consider breaking it down into sep
* This PR is usually trivial to review, so the size limit does not apply to
it.
* The component should use [`In Development` Stability](https://github.com/open-telemetry/opentelemetry-collector#development) in its README.
+ * Before submitting a PR, run the following commands from the root of the repository to ensure your new component is meeting the repo linting expectations:
+ * `make checkdoc`
+ * `make checkmetadata`
+ * `make checkapi`
+ * `make goporto`
+ * `make crosslink`
+ * `make gotidy`
+ * `make genotelcontribcol`
+ * `make genoteltestbedcol`
+ * `make generate`
+ * `make multimod-verify`
+ * `make generate-gh-issue-templates`
* **Second PR** should include the concrete implementation of the component. If the
size of this PR is larger than the recommended size consider splitting it in
multiple PRs.
@@ -174,8 +221,6 @@ to be included in the distributed otelcol-contrib binaries and docker images.
The following GitHub users are the currently available sponsors, either by being an approver or a maintainer of the contrib repository. The list is ordered based on a random sort of the list of sponsors done live at the Collector SIG meeting on 27-Apr-2022 and serves as the seed for the round-robin selection of sponsors, as described in the section above.
-* [@dashpole](https://github.com/dashpole)
-* [@TylerHelmuth](https://github.com/TylerHelmuth)
* [@djaglowski](https://github.com/djaglowski)
* [@codeboten](https://github.com/codeboten)
* [@Aneurysm9](https://github.com/Aneurysm9)
@@ -185,6 +230,13 @@ The following GitHub users are the currently available sponsors, either by being
* [@MovieStoreGuy](https://github.com/MovieStoreGuy)
* [@bogdandrutu](https://github.com/bogdandrutu)
* [@jpkrohling](https://github.com/jpkrohling)
+* [@dashpole](https://github.com/dashpole)
+* [@TylerHelmuth](https://github.com/TylerHelmuth)
+* [@fatsheep9146](https://github.com/fatsheep9146)
+* [@astencel-sumo](https://github.com/astencel-sumo)
+* [@songy23](https://github.com/songy23)
+* [@bryan-aguilar](https://github.com/bryan-aguilar)
+* [@atoulme](https://github.com/atoulme)
Whenever a sponsor is picked from the top of this list, please move them to the bottom.
@@ -265,24 +317,25 @@ triaged and is ready for work. If someone who is assigned to an issue is no long
### Label Definitions
-| Label | When to apply |
-| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `bug` | Something that is advertised or intended to work isn't working as expected. |
-| `enhancement` | Something that isn't an advertised feature that would be useful to users or maintainers. |
-| `flaky test` | A test unexpectedly failed during CI, showing that there is a problem with the tests or test setup that is causing the tests to intermittently fail. |
-| `good first issue` | Implementing this issue would not require specialized or in-depth knowledge about the component and is ideal for a new or first-time contributor to take. |
-| `help wanted` | The code owners for this component do not expect to have time to work on it soon, and would welcome help from contributors. |
-| `discussion needed` | This issue needs more input from the maintainers or community before work can be started. |
-| `needs triage` | This label is added automatically, and can be removed when a triager or code owner deems that an issue is either ready for work or should not need any work. |
-| `waiting for author` | Can be applied when input is required from the author before the issue can move any further. |
-| `priority:p0` | A critical security vulnerability or Collector panic using a default or common configuration unrelated to a specific component. |
-| `priority:p1` | An urgent issue that should be worked on quickly, before most other issues. |
-| `priority:p2` | A standard bug or enhancement. |
-| `priority:p3` | A technical improvement, lower priority bug, or other minor issue. Generally something that is considered a "nice to have." |
-| `release:blocker` | This issue must be resolved before the next Collector version can be released. |
-| `Sponsor Needed` | A new component has been proposed, but implementation is not ready to begin. This can be because a sponsor has not yet been decided, or because some details on the component still need to be decided. |
-| `Accepted Component` | A sponsor has elected to take on a component and implementation is ready to begin. |
-| `Vendor Specific Component` | This should be applied to any component proposal where the functionality for the component is particular to a vendor. |
+| Label | When to apply |
+| -------------------- |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `bug` | Something that is advertised or intended to work isn't working as expected. |
+| `enhancement` | Something that isn't an advertised feature that would be useful to users or maintainers. |
+| `flaky test` | A test unexpectedly failed during CI, showing that there is a problem with the tests or test setup that is causing the tests to intermittently fail. |
+| `documentation` | This is a collector usability issue that could likely be resolved by providing relevant documentation. Please consider adding new or improving existing documentation before closing issues with this label. |
+| `good first issue` | Implementing this issue would not require specialized or in-depth knowledge about the component and is ideal for a new or first-time contributor to take. |
+| `help wanted` | The code owners for this component do not expect to have time to work on it soon, and would welcome help from contributors. |
+| `discussion needed` | This issue needs more input from the maintainers or community before work can be started. |
+| `needs triage` | This label is added automatically, and can be removed when a triager or code owner deems that an issue is either ready for work or should not need any work. See also the [triaging process](#triage-process). |
+| `waiting for author` | Can be applied when input is required from the author before the issue can move any further. |
+| `priority:p0` | A critical security vulnerability or Collector panic using a default or common configuration unrelated to a specific component. |
+| `priority:p1` | An urgent issue that should be worked on quickly, before most other issues. |
+| `priority:p2` | A standard bug or enhancement. |
+| `priority:p3` | A technical improvement, lower priority bug, or other minor issue. Generally something that is considered a "nice to have." |
+| `release:blocker` | This issue must be resolved before the next Collector version can be released. |
+| `Sponsor Needed` | A new component has been proposed, but implementation is not ready to begin. This can be because a sponsor has not yet been decided, or because some details on the component still need to be decided. |
+| `Accepted Component` | A sponsor has elected to take on a component and implementation is ready to begin. |
+| `Vendor Specific Component` | This should be applied to any component proposal where the functionality for the component is particular to a vendor. |
### Adding Labels via Comments
@@ -308,13 +361,18 @@ Example label comment:
## Becoming a Code Owner
-A Code Owner is responsible for a component within Collector Contrib, as indicated by the [CODEOWNERS file](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/.github/CODEOWNERS). That responsibility includes maintaining the component, responding to issues, and reviewing pull requests.
+A Code Owner is responsible for a component within Collector Contrib, as indicated by the [CODEOWNERS file](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/.github/CODEOWNERS). That responsibility includes maintaining the component, triaging and responding to issues, and reviewing pull requests.
Sometimes a component may be in need of a new or additional Code Owner. A few reasons this situation may arise would be:
-- The component was never assigned a Code Owner.
+
+- The existing Code Owners are actively looking for more help.
- A previous Code Owner stepped down.
- An existing Code Owner has become unresponsive. See [unmaintained stability status](https://github.com/open-telemetry/opentelemetry-collector#unmaintained).
-- The existing Code Owners are actively looking for new Code Owners to help.
+- The component was never assigned a Code Owner.
+
+Code Ownership does not have to be a full-time job. If you can find a couple of hours to help out on a recurring basis, please consider pursuing Code Ownership.
+
+### Requirements
If you would like to help and become a Code Owner you must meet the following requirements:
@@ -323,9 +381,18 @@ If you would like to help and become a Code Owner you must meet the following re
Code Ownership is ultimately up to the judgement of the existing Code Owners and Collector Contrib Maintainers. Meeting the above requirements is not a guarantee to be granted Code Ownership.
-To become a Code Owner, open a PR with the CODEOWNERS file modified, adding your GitHub username to the component's row. Be sure to tag the existing Code Owners, if any, within the PR to ensure they receive a notification.
+### How to become a Code Owner
+
+To become a Code Owner, open a PR with the following changes:
+
+1. Add your GitHub username to the active codeowners entry in the component's `metadata.yaml` file.
+2. Run the command `make update-codeowners`.
+ * Note: A GitHub [personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) must be configured for this command to work.
+ * If this command is unsuccessful, manually update the component's row in the [CODEOWNERS](.github/CODEOWNERS) file, and then run `make generate` to regenerate the component's README header.
+
+Be sure to tag the existing Code Owners, if any, within the PR to ensure they receive a notification.
-### Makefile Guidelines
+## Makefile Guidelines
When adding or modifying the `Makefile`'s in this repository, consider the following design guidelines.
@@ -334,7 +401,7 @@ The [Makefile](./Makefile) SHOULD contain "repo-level" targets. (i.e. targets th
Likewise, `Makefile.Common` SHOULD contain "module-level" targets. (i.e. targets that apply to one module at a time.)
Each module should have a `Makefile` at its root that includes `Makefile.Common`.
-#### Module-level targets
+### Module-level targets
Module-level targets SHOULD NOT act on nested modules. For example, running `make lint` at the root of the repo will
*only* evaluate code that is part of the `go.opentelemetry.io/collector` module. This excludes nested modules such as
@@ -344,7 +411,7 @@ Each module-level target SHOULD have a corresponding repo-level target. For exam
in each module. In this way, the entire repository is covered. The root `Makefile` contains some "for each module" targets
that can wrap a module-level target into a repo-level target.
-#### Repo-level targets
+### Repo-level targets
Whenever reasonable, targets SHOULD be implemented as module-level targets (and wrapped with a repo-level target).
However, there are many valid justifications for implementing a standalone repo-level target.
@@ -354,7 +421,7 @@ However, there are many valid justifications for implementing a standalone repo-
3. A necessary tool does not provide a mechanism for scoping its application. (e.g. `porto` cannot be limited to a specific module.)
4. The "for each module" pattern would result in incomplete coverage of the codebase. (e.g. a target that scans all files, not just `.go` files.)
-#### Default targets
+### Default targets
The default module-level target (i.e. running `make` in the context of an individual module) should run a substantial set of module-level
targets for an individual module. Ideally, this would include *all* module-level targets, but exceptions should be made if a particular
diff --git a/Makefile b/Makefile
index e08f9c74e5fc..bf238d3b9404 100644
--- a/Makefile
+++ b/Makefile
@@ -3,12 +3,9 @@ include ./Makefile.Common
RUN_CONFIG?=local/config.yaml
CMD?=
OTEL_VERSION=main
-OTEL_RC_VERSION=main
OTEL_STABLE_VERSION=main
-BUILD_INFO_IMPORT_PATH=github.com/open-telemetry/opentelemetry-collector-contrib/internal/otelcontribcore/internal/version
VERSION=$(shell git describe --always --match "v[0-9]*" HEAD)
-BUILD_INFO=-ldflags "-X $(BUILD_INFO_IMPORT_PATH).Version=$(VERSION)"
COMP_REL_PATH=cmd/otelcontribcol/components.go
MOD_NAME=github.com/open-telemetry/opentelemetry-collector-contrib
@@ -21,23 +18,30 @@ TO_MOD_DIR=dirname {} \; | sort | grep -E '^./'
EX_COMPONENTS=-not -path "./receiver/*" -not -path "./processor/*" -not -path "./exporter/*" -not -path "./extension/*" -not -path "./connector/*"
EX_INTERNAL=-not -path "./internal/*"
EX_PKG=-not -path "./pkg/*"
+EX_CMD=-not -path "./cmd/*"
# NONROOT_MODS includes ./* dirs (excludes . dir)
NONROOT_MODS := $(shell find . $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
-RECEIVER_MODS_0 := $(shell find ./receiver/[a-k]* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
-RECEIVER_MODS_1 := $(shell find ./receiver/[l-z]* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
-RECEIVER_MODS := $(RECEIVER_MODS_0) $(RECEIVER_MODS_1)
+RECEIVER_MODS_0 := $(shell find ./receiver/[a-f]* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
+RECEIVER_MODS_1 := $(shell find ./receiver/[g-o]* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
+RECEIVER_MODS_2 := $(shell find ./receiver/[p]* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) ) # Prometheus is special and gets its own section.
+RECEIVER_MODS_3 := $(shell find ./receiver/[q-z]* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
+RECEIVER_MODS := $(RECEIVER_MODS_0) $(RECEIVER_MODS_1) $(RECEIVER_MODS_2) $(RECEIVER_MODS_3)
PROCESSOR_MODS := $(shell find ./processor/* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
-EXPORTER_MODS := $(shell find ./exporter/* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
+EXPORTER_MODS_0 := $(shell find ./exporter/[a-m]* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
+EXPORTER_MODS_1 := $(shell find ./exporter/[n-z]* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
+EXPORTER_MODS := $(EXPORTER_MODS_0) $(EXPORTER_MODS_1)
EXTENSION_MODS := $(shell find ./extension/* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
CONNECTOR_MODS := $(shell find ./connector/* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
INTERNAL_MODS := $(shell find ./internal/* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
PKG_MODS := $(shell find ./pkg/* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
-OTHER_MODS := $(shell find . $(EX_COMPONENTS) $(EX_INTERNAL) $(EX_PKG) $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) ) $(PWD)
-ALL_MODS := $(RECEIVER_MODS) $(PROCESSOR_MODS) $(EXPORTER_MODS) $(EXTENSION_MODS) $(CONNECTOR_MODS) $(INTERNAL_MODS) $(PKG_MODS) $(OTHER_MODS)
+CMD_MODS_0 := $(shell find ./cmd/[a-m]* $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) )
+CMD_MODS_1 := $(shell find ./cmd/[n-z]* $(FIND_MOD_ARGS) -not -path "./cmd/otelcontribcol/*" -exec $(TO_MOD_DIR) )
+CMD_MODS := $(CMD_MODS_0) $(CMD_MODS_1)
+OTHER_MODS := $(shell find . $(EX_COMPONENTS) $(EX_INTERNAL) $(EX_PKG) $(EX_CMD) $(FIND_MOD_ARGS) -exec $(TO_MOD_DIR) ) $(PWD)
+ALL_MODS := $(RECEIVER_MODS) $(PROCESSOR_MODS) $(EXPORTER_MODS) $(EXTENSION_MODS) $(CONNECTOR_MODS) $(INTERNAL_MODS) $(PKG_MODS) $(CMD_MODS) $(OTHER_MODS)
-# find -exec dirname cannot be used to process multiple matching patterns
FIND_INTEGRATION_TEST_MODS={ find . -type f -name "*integration_test.go" & find . -type f -name "*e2e_test.go" -not -path "./testbed/*"; }
INTEGRATION_MODS := $(shell $(FIND_INTEGRATION_TEST_MODS) | xargs $(TO_MOD_DIR) | uniq)
@@ -53,13 +57,18 @@ all-modules:
all-groups:
@echo "receiver-0: $(RECEIVER_MODS_0)"
@echo "\nreceiver-1: $(RECEIVER_MODS_1)"
+ @echo "\nreceiver-2: $(RECEIVER_MODS_2)"
+ @echo "\nreceiver-3: $(RECEIVER_MODS_3)"
@echo "\nreceiver: $(RECEIVER_MODS)"
@echo "\nprocessor: $(PROCESSOR_MODS)"
- @echo "\nexporter: $(EXPORTER_MODS)"
+ @echo "\nexporter-0: $(EXPORTER_MODS_0)"
+ @echo "\nexporter-1: $(EXPORTER_MODS_1)"
@echo "\nextension: $(EXTENSION_MODS)"
@echo "\nconnector: $(CONNECTOR_MODS)"
@echo "\ninternal: $(INTERNAL_MODS)"
@echo "\npkg: $(PKG_MODS)"
+ @echo "\ncmd-0: $(CMD_MODS_0)"
+ @echo "\ncmd-1: $(CMD_MODS_1)"
@echo "\nother: $(OTHER_MODS)"
.PHONY: all
@@ -71,7 +80,7 @@ all-common:
.PHONY: e2e-test
e2e-test: otelcontribcol oteltestbedcol
- $(MAKE) -C testbed run-tests
+ $(MAKE) --no-print-directory -C testbed run-tests
.PHONY: integration-test
integration-test:
@@ -87,6 +96,10 @@ stability-tests: otelcontribcol
@echo Stability tests are disabled until we have a stable performance environment.
@echo To enable the tests replace this echo by $(MAKE) -C testbed run-stability-tests
+.PHONY: gogci
+gogci:
+ $(MAKE) $(FOR_GROUP_TARGET) TARGET="gci"
+
.PHONY: gotidy
gotidy:
$(MAKE) $(FOR_GROUP_TARGET) TARGET="tidy"
@@ -104,6 +117,10 @@ gotest-with-cover:
@$(MAKE) $(FOR_GROUP_TARGET) TARGET="test-with-cover"
$(GOCMD) tool covdata textfmt -i=./coverage/unit -o ./$(GROUP)-coverage.txt
+.PHONY: gointegration-test
+gointegration-test:
+ $(MAKE) $(FOR_GROUP_TARGET) TARGET="mod-integration-test"
+
.PHONY: gofmt
gofmt:
$(MAKE) $(FOR_GROUP_TARGET) TARGET="fmt"
@@ -134,46 +151,18 @@ COMMIT?=HEAD
MODSET?=contrib-core
REMOTE?=git@github.com:open-telemetry/opentelemetry-collector-contrib.git
.PHONY: push-tags
-push-tags: $(MULITMOD)
- $(MULITMOD) verify
- set -e; for tag in `$(MULITMOD) tag -m ${MODSET} -c ${COMMIT} --print-tags | grep -v "Using" `; do \
+push-tags: $(MULTIMOD)
+ $(MULTIMOD) verify
+ set -e; for tag in `$(MULTIMOD) tag -m ${MODSET} -c ${COMMIT} --print-tags | grep -v "Using" `; do \
echo "pushing tag $${tag}"; \
git push ${REMOTE} $${tag}; \
done;
-DEPENDABOT_PATH=".github/dependabot.yml"
-.PHONY: gendependabot
-gendependabot:
- @echo "Recreating ${DEPENDABOT_PATH} file"
- @echo "# File generated by \"make gendependabot\"; DO NOT EDIT." > ${DEPENDABOT_PATH}
- @echo "" >> ${DEPENDABOT_PATH}
- @echo "version: 2" >> ${DEPENDABOT_PATH}
- @echo "updates:" >> ${DEPENDABOT_PATH}
- @echo "Add entry for \"/\" gomod"
- @echo " - package-ecosystem: \"gomod\"" >> ${DEPENDABOT_PATH}
- @echo " directory: \"/\"" >> ${DEPENDABOT_PATH}
- @echo " schedule:" >> ${DEPENDABOT_PATH}
- @echo " interval: \"weekly\"" >> ${DEPENDABOT_PATH}
- @echo " day: \"wednesday\"" >> ${DEPENDABOT_PATH}
- @set -e; for dir in `echo $(NONROOT_MODS) | tr ' ' '\n' | head -n 219 | tr '\n' ' '`; do \
- echo "Add entry for \"$${dir:1}\""; \
- echo " - package-ecosystem: \"gomod\"" >> ${DEPENDABOT_PATH}; \
- echo " directory: \"$${dir:1}\"" >> ${DEPENDABOT_PATH}; \
- echo " schedule:" >> ${DEPENDABOT_PATH}; \
- echo " interval: \"weekly\"" >> ${DEPENDABOT_PATH}; \
- echo " day: \"wednesday\"" >> ${DEPENDABOT_PATH}; \
- done
- @echo "The following modules are not included in the dependabot file because it has a limit of 220 entries:"
- @set -e; for dir in `echo $(NONROOT_MODS) | tr ' ' '\n' | tail -n +220 | tr '\n' ' '`; do \
- echo " - $${dir:1}"; \
- done
-
-
# Define a delegation target for each module
.PHONY: $(ALL_MODS)
$(ALL_MODS):
@echo "Running target '$(TARGET)' in module '$@' as part of group '$(GROUP)'"
- $(MAKE) -C $@ $(TARGET)
+ $(MAKE) --no-print-directory -C $@ $(TARGET)
# Trigger each module's delegation target
.PHONY: for-all-target
@@ -188,12 +177,24 @@ for-receiver-0-target: $(RECEIVER_MODS_0)
.PHONY: for-receiver-1-target
for-receiver-1-target: $(RECEIVER_MODS_1)
+.PHONY: for-receiver-2-target
+for-receiver-2-target: $(RECEIVER_MODS_2)
+
+.PHONY: for-receiver-3-target
+for-receiver-3-target: $(RECEIVER_MODS_3)
+
.PHONY: for-processor-target
for-processor-target: $(PROCESSOR_MODS)
.PHONY: for-exporter-target
for-exporter-target: $(EXPORTER_MODS)
+.PHONY: for-exporter-0-target
+for-exporter-0-target: $(EXPORTER_MODS_0)
+
+.PHONY: for-exporter-1-target
+for-exporter-1-target: $(EXPORTER_MODS_1)
+
.PHONY: for-extension-target
for-extension-target: $(EXTENSION_MODS)
@@ -206,6 +207,15 @@ for-internal-target: $(INTERNAL_MODS)
.PHONY: for-pkg-target
for-pkg-target: $(PKG_MODS)
+.PHONY: for-cmd-target
+for-cmd-target: $(CMD_MODS)
+
+.PHONY: for-cmd-0-target
+for-cmd-0-target: $(CMD_MODS_0)
+
+.PHONY: for-cmd-1-target
+for-cmd-1-target: $(CMD_MODS_1)
+
.PHONY: for-other-target
for-other-target: $(OTHER_MODS)
@@ -240,10 +250,13 @@ docker-otelcontribcol:
.PHONY: docker-telemetrygen
docker-telemetrygen:
- COMPONENT=telemetrygen $(MAKE) docker-component
+ GOOS=linux GOARCH=$(GOARCH) $(MAKE) telemetrygen
+ cp bin/telemetrygen_* cmd/telemetrygen/
+ cd cmd/telemetrygen && docker build --platform linux/$(GOARCH) -t telemetrygen:latest .
+ rm cmd/telemetrygen/telemetrygen_*
.PHONY: generate
-generate:
+generate: install-tools
cd cmd/mdatagen && $(GOCMD) install .
$(MAKE) for-all CMD="$(GOCMD) generate ./..."
@@ -253,59 +266,73 @@ mdatagen-test:
cd cmd/mdatagen && $(GOCMD) generate ./...
cd cmd/mdatagen && $(GOCMD) test ./...
+.PHONY: githubgen-install
+githubgen-install:
+ cd cmd/githubgen && $(GOCMD) install .
+
+.PHONY: gengithub
+gengithub: githubgen-install
+ githubgen
+
+.PHONY: gendistributions
+gendistributions: githubgen-install
+ githubgen distributions
+
+.PHONY: update-codeowners
+update-codeowners: gengithub generate
+
FILENAME?=$(shell git branch --show-current)
.PHONY: chlog-new
chlog-new: $(CHLOGGEN)
- $(CHLOGGEN) new --filename $(FILENAME)
+ $(CHLOGGEN) new --config $(CHLOGGEN_CONFIG) --filename $(FILENAME)
.PHONY: chlog-validate
chlog-validate: $(CHLOGGEN)
- $(CHLOGGEN) validate
+ $(CHLOGGEN) validate --config $(CHLOGGEN_CONFIG)
.PHONY: chlog-preview
chlog-preview: $(CHLOGGEN)
- $(CHLOGGEN) update --dry
+ $(CHLOGGEN) update --config $(CHLOGGEN_CONFIG) --dry
.PHONY: chlog-update
chlog-update: $(CHLOGGEN)
- $(CHLOGGEN) update --version $(VERSION)
+ $(CHLOGGEN) update --config $(CHLOGGEN_CONFIG) --version $(VERSION)
.PHONY: genotelcontribcol
genotelcontribcol: $(BUILDER)
$(BUILDER) --skip-compilation --config cmd/otelcontribcol/builder-config.yaml --output-path cmd/otelcontribcol
- $(MAKE) -C cmd/otelcontribcol fmt
+ $(MAKE) --no-print-directory -C cmd/otelcontribcol fmt
# Build the Collector executable.
.PHONY: otelcontribcol
otelcontribcol:
cd ./cmd/otelcontribcol && GO111MODULE=on CGO_ENABLED=0 $(GOCMD) build -trimpath -o ../../bin/otelcontribcol_$(GOOS)_$(GOARCH)$(EXTENSION) \
- $(BUILD_INFO) -tags $(GO_BUILD_TAGS) .
+ -tags $(GO_BUILD_TAGS) .
.PHONY: genoteltestbedcol
genoteltestbedcol: $(BUILDER)
$(BUILDER) --skip-compilation --config cmd/oteltestbedcol/builder-config.yaml --output-path cmd/oteltestbedcol
- $(MAKE) -C cmd/oteltestbedcol fmt
+ $(MAKE) --no-print-directory -C cmd/oteltestbedcol fmt
# Build the Collector executable, with only components used in testbed.
.PHONY: oteltestbedcol
oteltestbedcol:
cd ./cmd/oteltestbedcol && GO111MODULE=on CGO_ENABLED=0 $(GOCMD) build -trimpath -o ../../bin/oteltestbedcol_$(GOOS)_$(GOARCH)$(EXTENSION) \
- $(BUILD_INFO) -tags $(GO_BUILD_TAGS) .
+ -tags $(GO_BUILD_TAGS) .
# Build the telemetrygen executable.
.PHONY: telemetrygen
telemetrygen:
cd ./cmd/telemetrygen && GO111MODULE=on CGO_ENABLED=0 $(GOCMD) build -trimpath -o ../../bin/telemetrygen_$(GOOS)_$(GOARCH)$(EXTENSION) \
- $(BUILD_INFO) -tags $(GO_BUILD_TAGS) .
-
-.PHONY: update-dep
-update-dep:
- $(MAKE) $(FOR_GROUP_TARGET) TARGET="updatedep"
- $(MAKE) otelcontribcol
+ -tags $(GO_BUILD_TAGS) .
.PHONY: update-otel
-update-otel:
- $(MAKE) update-dep MODULE=go.opentelemetry.io/collector VERSION=$(OTEL_VERSION) RC_VERSION=$(OTEL_RC_VERSION) STABLE_VERSION=$(OTEL_STABLE_VERSION)
+update-otel: $(MULTIMOD)
+ $(MULTIMOD) sync -s=true -o ../opentelemetry-collector -m stable --commit-hash $(OTEL_STABLE_VERSION)
+ git add . && git commit -s -m "[chore] multimod update stable modules"
+ $(MULTIMOD) sync -s=true -o ../opentelemetry-collector -m beta --commit-hash $(OTEL_VERSION)
+ git add . && git commit -s -m "[chore] multimod update beta modules"
+ $(MAKE) gotidy
.PHONY: otel-from-tree
otel-from-tree:
@@ -317,7 +344,7 @@ otel-from-tree:
# 2. Run `make otel-from-tree` (only need to run it once to remap go modules)
# 3. You can now build contrib and it will use your local otel core changes.
# 4. Before committing/pushing your contrib changes, undo by running `make otel-from-lib`.
- $(MAKE) for-all CMD="$(GOCMD) mod edit -replace go.opentelemetry.io/collector=$(SRC_ROOT)/../opentelemetry-collector"
+ $(MAKE) for-all CMD="$(GOCMD) mod edit -replace go.opentelemetry.io/collector=$(SRC_PARENT_DIR)/opentelemetry-collector"
.PHONY: otel-from-lib
otel-from-lib:
@@ -327,6 +354,7 @@ otel-from-lib:
.PHONY: build-examples
build-examples:
docker-compose -f examples/demo/docker-compose.yaml build
+ cd examples/secure-tracing/certs && $(MAKE) clean && $(MAKE) all && docker-compose -f ../docker-compose.yaml build
docker-compose -f exporter/splunkhecexporter/example/docker-compose.yml build
.PHONY: deb-rpm-package
@@ -338,8 +366,17 @@ build-examples:
# Verify existence of READMEs for components specified as default components in the collector.
.PHONY: checkdoc
-checkdoc: $(CHECKDOC)
- $(CHECKDOC) --project-path $(CURDIR) --component-rel-path $(COMP_REL_PATH) --module-name $(MOD_NAME)
+checkdoc: $(CHECKFILE)
+ $(CHECKFILE) --project-path $(CURDIR) --component-rel-path $(COMP_REL_PATH) --module-name $(MOD_NAME) --file-name "README.md"
+
+# Verify existence of metadata.yaml for components specified as default components in the collector.
+.PHONY: checkmetadata
+checkmetadata: $(CHECKFILE)
+ $(CHECKFILE) --project-path $(CURDIR) --component-rel-path $(COMP_REL_PATH) --module-name $(MOD_NAME) --file-name "metadata.yaml"
+
+.PHONY: checkapi
+checkapi:
+ $(GOCMD) run cmd/checkapi/main.go .
.PHONY: all-checklinks
all-checklinks:
@@ -367,18 +404,18 @@ certs:
$(foreach dir, $(CERT_DIRS), $(call exec-command, @internal/buildscripts/gen-certs.sh -o $(dir)))
.PHONY: multimod-verify
-multimod-verify: $(MULITMOD)
+multimod-verify: $(MULTIMOD)
@echo "Validating versions.yaml"
- $(MULITMOD) verify
+ $(MULTIMOD) verify
.PHONY: multimod-prerelease
-multimod-prerelease: $(MULITMOD)
- $(MULITMOD) prerelease -s=true -b=false -v ./versions.yaml -m contrib-base
+multimod-prerelease: $(MULTIMOD)
+ $(MULTIMOD) prerelease -s=true -b=false -v ./versions.yaml -m contrib-base
$(MAKE) gotidy
.PHONY: multimod-sync
-multimod-sync: $(MULITMOD)
- $(MULITMOD) sync -a=true -s=true -o ../opentelemetry-collector
+multimod-sync: $(MULTIMOD)
+ $(MULTIMOD) sync -a=true -s=true -o ../opentelemetry-collector
$(MAKE) gotidy
.PHONY: crosslink
@@ -401,10 +438,20 @@ genconfigdocs:
.PHONY: generate-gh-issue-templates
generate-gh-issue-templates:
- for FILE in bug_report feature_request other; do \
- YAML_FILE=".github/ISSUE_TEMPLATE/$${FILE}.yaml"; \
- TMP_FILE=".github/ISSUE_TEMPLATE/$${FILE}.yaml.tmp"; \
- cat "$${YAML_FILE}" > "$${TMP_FILE}"; \
- FILE="$${TMP_FILE}" ./.github/workflows/scripts/add-component-options.sh > "$${YAML_FILE}"; \
- rm "$${TMP_FILE}"; \
- done
+ cd cmd/githubgen && $(GOCMD) install .
+ githubgen issue-templates
+
+.PHONY: checks
+checks:
+ $(MAKE) checkdoc
+ $(MAKE) checkmetadata
+ $(MAKE) checkapi
+ $(MAKE) -j4 goporto
+ $(MAKE) crosslink
+ $(MAKE) -j4 gotidy
+ $(MAKE) genotelcontribcol
+ $(MAKE) genoteltestbedcol
+ $(MAKE) gendistributions
+ $(MAKE) -j4 generate
+ $(MAKE) multimod-verify
+ git diff --exit-code || (echo 'Some files need committing' && git status && exit 1)
diff --git a/Makefile.Common b/Makefile.Common
index a34214fb8d95..24acdf51aac5 100644
--- a/Makefile.Common
+++ b/Makefile.Common
@@ -3,24 +3,35 @@
# otherwise in the example command pipe, only the exit code of `tee` is recorded instead of that of `go test`, which can cause
# tests to pass in CI when they should not.
SHELL = /bin/bash
-ifeq ($(shell uname -s),Windows)
- .SHELLFLAGS = /o pipefile /c
+.SHELLFLAGS = -o pipefail -c
+
+SHELL_CASE_EXP = case "$$(uname -s)" in CYGWIN*|MINGW*|MSYS*) echo "true";; esac;
+UNIX_SHELL_ON_WINDOWS := $(shell $(SHELL_CASE_EXP))
+
+ifeq ($(UNIX_SHELL_ON_WINDOWS),true)
+ # The "sed" transformation below is needed on Windows, since commands like `go list -f '{{ .Dir }}'`
+ # return Windows paths and such paths are incompatible with other *nix tools, like `find`,
+ # used by the Makefile shell.
+ # The backslash needs to be doubled so it's passed correctly to the shell.
+ NORMALIZE_DIRS = sed -e 's/^/\\//' -e 's/://' -e 's/\\\\/\\//g' | sort
else
- .SHELLFLAGS = -o pipefail -c
+ NORMALIZE_DIRS = sort
endif
# SRC_ROOT is the top of the source tree.
SRC_ROOT := $(shell git rev-parse --show-toplevel)
+# SRC_PARENT_DIR is the absolute path of the source tree's parent directory
+SRC_PARENT_DIR := $(shell dirname $(SRC_ROOT))
# build tags required by any component should be defined as independent variables and later added to GO_BUILD_TAGS below
GO_BUILD_TAGS=""
-GOTEST_OPT?= -race -timeout 300s -parallel 4 --tags=$(GO_BUILD_TAGS)
+GOTEST_TIMEOUT?= 300s
+GOTEST_OPT?= -race -timeout $(GOTEST_TIMEOUT) -parallel 4 --tags=$(GO_BUILD_TAGS)
GOTEST_INTEGRATION_OPT?= -race -timeout 360s -parallel 4
GOTEST_OPT_WITH_COVERAGE = $(GOTEST_OPT) -coverprofile=coverage.txt -covermode=atomic
GOTEST_OPT_WITH_INTEGRATION=$(GOTEST_INTEGRATION_OPT) -tags=integration,$(GO_BUILD_TAGS) -run=Integration
GOTEST_OPT_WITH_INTEGRATION_COVERAGE=$(GOTEST_OPT_WITH_INTEGRATION) -coverprofile=integration-coverage.txt -covermode=atomic
GOCMD?= go
-GOTEST=$(GOCMD) test
GOOS=$(shell $(GOCMD) env GOOS)
GOARCH=$(shell $(GOCMD) env GOARCH)
@@ -34,6 +45,7 @@ TOOLS_MOD_REGEX := "\s+_\s+\".*\""
TOOLS_PKG_NAMES := $(shell grep -E $(TOOLS_MOD_REGEX) < $(TOOLS_MOD_DIR)/tools.go | tr -d " _\"")
TOOLS_BIN_DIR := $(SRC_ROOT)/.tools
TOOLS_BIN_NAMES := $(addprefix $(TOOLS_BIN_DIR)/, $(notdir $(TOOLS_PKG_NAMES)))
+CHLOGGEN_CONFIG := .chloggen/config.yaml
.PHONY: install-tools
install-tools: $(TOOLS_BIN_NAMES)
@@ -49,34 +61,49 @@ MDLINKCHECK := $(TOOLS_BIN_DIR)/markdown-link-check
MISSPELL := $(TOOLS_BIN_DIR)/misspell -error
MISSPELL_CORRECTION := $(TOOLS_BIN_DIR)/misspell -w
LINT := $(TOOLS_BIN_DIR)/golangci-lint
-MULITMOD := $(TOOLS_BIN_DIR)/multimod
+MULTIMOD := $(TOOLS_BIN_DIR)/multimod
CHLOGGEN := $(TOOLS_BIN_DIR)/chloggen
GOIMPORTS := $(TOOLS_BIN_DIR)/goimports
PORTO := $(TOOLS_BIN_DIR)/porto
-CHECKDOC := $(TOOLS_BIN_DIR)/checkdoc
+CHECKFILE := $(TOOLS_BIN_DIR)/checkfile
CROSSLINK := $(TOOLS_BIN_DIR)/crosslink
GOJUNIT := $(TOOLS_BIN_DIR)/go-junit-report
BUILDER := $(TOOLS_BIN_DIR)/builder
GOVULNCHECK := $(TOOLS_BIN_DIR)/govulncheck
+GCI := $(TOOLS_BIN_DIR)/gci
+GOTESTSUM := $(TOOLS_BIN_DIR)/gotestsum
+
+GOTESTSUM_OPT?= --rerun-fails=1
# BUILD_TYPE should be one of (dev, release).
BUILD_TYPE?=release
-ALL_PKG_DIRS := $(shell $(GOCMD) list -f '{{ .Dir }}' ./... | sort)
+ALL_PKG_DIRS := $(shell $(GOCMD) list -f '{{ .Dir }}' ./... | $(NORMALIZE_DIRS))
ALL_SRC := $(shell find $(ALL_PKG_DIRS) -name '*.go' \
-not -path '*/third_party/*' \
-not -path '*/local/*' \
-type f | sort)
+ALL_SRC_AND_SHELL := find . -type f \( -iname '*.go' -o -iname "*.sh" \) ! -path '**/third_party/*' | sort
+
# All source code and documents. Used in spell check.
-ALL_SRC_AND_DOC := $(shell find $(ALL_PKG_DIRS) -name "*.md" -o -name "*.go" -o -name "*.yaml" \
- -not -path '*/third_party/*' \
- -type f | sort)
+ALL_SRC_AND_DOC_CMD := find $(ALL_PKG_DIRS) -name "*.md" -o -name "*.go" -o -name "*.yaml" -not -path '*/third_party/*' -type f | sort
+ifeq ($(UNIX_SHELL_ON_WINDOWS),true)
+ # Windows has a low limit, 8192 chars, on the command line used to create a process. Work around it by breaking the file list into smaller commands.
+ MISSPELL_CMD := $(ALL_SRC_AND_DOC_CMD) | xargs -n 20 $(MISSPELL)
+ MISSPELL_CORRECTION_CMD := $(ALL_SRC_AND_DOC_CMD) | xargs -n 20 $(MISSPELL_CORRECTION)
+else
+ ALL_SRC_AND_DOC := $(shell $(ALL_SRC_AND_DOC_CMD))
+ MISSPELL_CMD := $(MISSPELL) $(ALL_SRC_AND_DOC)
+ MISSPELL_CORRECTION_CMD := $(MISSPELL_CORRECTION) $(ALL_SRC_AND_DOC)
+endif
# ALL_PKGS is used with 'go cover'
ALL_PKGS := $(shell $(GOCMD) list $(sort $(dir $(ALL_SRC))))
+ADDLICENSE_CMD := $(ADDLICENSE) -s=only -y "" -c "The OpenTelemetry Authors"
+
pwd:
@pwd
@@ -95,61 +122,66 @@ all-pkg-dirs:
common: lint test
.PHONY: test
-test:
- $(GOTEST) $(GOTEST_OPT) ./...
+test: $(GOTESTSUM)
+ $(GOTESTSUM) $(GOTESTSUM_OPT) --packages="./..." -- $(GOTEST_OPT)
.PHONY: test-with-cover
-test-with-cover:
+test-with-cover: $(GOTESTSUM)
mkdir -p $(PWD)/coverage/unit
- $(GOTEST) $(GOTEST_OPT) -cover ./... -covermode=atomic -args -test.gocoverdir="$(PWD)/coverage/unit"
+ $(GOTESTSUM) $(GOTESTSUM_OPT) --packages="./..." -- $(GOTEST_OPT) -cover -covermode=atomic -args -test.gocoverdir="$(PWD)/coverage/unit"
.PHONY: do-unit-tests-with-cover
-do-unit-tests-with-cover:
+do-unit-tests-with-cover: $(GOTESTSUM)
@echo "running $(GOCMD) unit test ./... + coverage in `pwd`"
- $(GOTEST) $(GOTEST_OPT_WITH_COVERAGE) ./...
+ $(GOTESTSUM) $(GOTESTSUM_OPT) --packages="./..." -- $(GOTEST_OPT_WITH_COVERAGE)
$(GOCMD) tool cover -html=coverage.txt -o coverage.html
.PHONY: mod-integration-test
-mod-integration-test:
+mod-integration-test: $(GOTESTSUM)
@echo "running $(GOCMD) integration test ./... in `pwd`"
- $(GOTEST) $(GOTEST_OPT_WITH_INTEGRATION) ./...
+ $(GOTESTSUM) $(GOTESTSUM_OPT) --packages="./..." -- $(GOTEST_OPT_WITH_INTEGRATION)
@if [ -e integration-coverage.txt ]; then \
$(GOCMD) tool cover -html=integration-coverage.txt -o integration-coverage.html; \
fi
.PHONY: do-integration-tests-with-cover
-do-integration-tests-with-cover:
+do-integration-tests-with-cover: $(GOTESTSUM)
@echo "running $(GOCMD) integration test ./... + coverage in `pwd`"
- $(GOTEST) $(GOTEST_OPT_WITH_INTEGRATION_COVERAGE) ./...
+ $(GOTESTSUM) $(GOTESTSUM_OPT) --packages="./..." -- $(GOTEST_OPT_WITH_INTEGRATION_COVERAGE)
@if [ -e integration-coverage.txt ]; then \
$(GOCMD) tool cover -html=integration-coverage.txt -o integration-coverage.html; \
fi
.PHONY: benchmark
-benchmark:
- $(GOTEST) -bench=. -run=notests --tags=$(GO_BUILD_TAGS) $(ALL_PKGS)
+benchmark: $(GOTESTSUM)
+ $(GOTESTSUM) $(GOTESTSUM_OPT) --packages="$(ALL_PKGS)" -- -bench=. -run=notests --tags=$(GO_BUILD_TAGS)
.PHONY: addlicense
addlicense: $(ADDLICENSE)
- @ADDLICENSEOUT=`$(ADDLICENSE) -s=only -y "" -c 'The OpenTelemetry Authors' $(ALL_SRC) 2>&1`; \
- if [ "$$ADDLICENSEOUT" ]; then \
- echo "$(ADDLICENSE) FAILED => add License errors:\n"; \
- echo "$$ADDLICENSEOUT\n"; \
- exit 1; \
- else \
- echo "Add License finished successfully"; \
- fi
+ @ADDLICENSEOUT=$$(for f in $$($(ALL_SRC_AND_SHELL)); do \
+ `$(ADDLICENSE_CMD) "$$f" 2>&1`; \
+ done); \
+ if [ "$$ADDLICENSEOUT" ]; then \
+ echo "$(ADDLICENSE) FAILED => add License errors:\n"; \
+ echo "$$ADDLICENSEOUT\n"; \
+ exit 1; \
+ else \
+ echo "Add License finished successfully"; \
+ fi
.PHONY: checklicense
checklicense: $(ADDLICENSE)
- @licRes=$$(for f in $$(find . -type f \( -iname '*.go' -o -iname '*.sh' \) ! -path '**/third_party/*') ; do \
- awk '/Copyright The OpenTelemetry Authors|generated|GENERATED/ && NR<=3 { found=1; next } END { if (!found) print FILENAME }' $$f; \
- awk '/SPDX-License-Identifier: Apache-2.0|generated|GENERATED/ && NR<=4 { found=1; next } END { if (!found) print FILENAME }' $$f; \
- done); \
- if [ -n "$${licRes}" ]; then \
- echo "license header checking failed:"; echo "$${licRes}"; \
- exit 1; \
- fi
+ @licRes=$$(for f in $$($(ALL_SRC_AND_SHELL)); do \
+ awk '/Copyright The OpenTelemetry Authors|generated|GENERATED/ && NR<=3 { found=1; next } END { if (!found) print FILENAME }' $$f; \
+ awk '/SPDX-License-Identifier: Apache-2.0|generated|GENERATED/ && NR<=4 { found=1; next } END { if (!found) print FILENAME }' $$f; \
+ $(ADDLICENSE_CMD) -check "$$f" 2>&1 || echo "$$f"; \
+ done); \
+ if [ -n "$${licRes}" ]; then \
+ echo "license header checking failed:"; echo "$${licRes}"; \
+ exit 1; \
+ else \
+ echo "Check License finished successfully"; \
+ fi
.PHONY: checklinks
checklinks:
@@ -164,7 +196,7 @@ fmt: $(GOIMPORTS)
.PHONY: lint
lint: $(LINT) checklicense misspell
- $(LINT) run --allow-parallel-runners --verbose --build-tags integration --path-prefix $(shell basename "$(CURDIR)")
+ $(LINT) run --allow-parallel-runners --verbose --build-tags integration --timeout=30m --path-prefix $(shell basename "$(CURDIR)")
.PHONY: govulncheck
govulncheck: $(GOVULNCHECK)
@@ -173,22 +205,22 @@ govulncheck: $(GOVULNCHECK)
.PHONY: tidy
tidy:
rm -fr go.sum
- $(GOCMD) mod tidy -compat=1.19
+ $(GOCMD) mod tidy -compat=1.21
.PHONY: misspell
misspell: $(TOOLS_BIN_DIR)/misspell
@echo "running $(MISSPELL)"
- @$(MISSPELL) $(ALL_SRC_AND_DOC)
+ @$(MISSPELL_CMD)
.PHONY: misspell-correction
misspell-correction: $(TOOLS_BIN_DIR)/misspell
- $(MISSPELL_CORRECTION) $(ALL_SRC_AND_DOC)
+ $(MISSPELL_CORRECTION_CMD)
.PHONY: moddownload
moddownload:
$(GOCMD) mod download
-.PHONY: updatedep
-updatedep:
- $(PWD)/internal/buildscripts/update-dep
- @$(MAKE) tidy
+.PHONY: gci
+gci: $(TOOLS_BIN_DIR)/gci
+ @echo "running $(GCI)"
+ @$(GCI) write -s standard -s default -s "prefix(github.com/open-telemetry/opentelemetry-collector-contrib)" $(ALL_SRC_AND_DOC)
diff --git a/NOTICE b/NOTICE
new file mode 100644
index 000000000000..72a751368a46
--- /dev/null
+++ b/NOTICE
@@ -0,0 +1,27 @@
+receiver/hostmetricsreceiver/internal/scraper/processscraper/process.go contains code originating from gopsutil under internal/common/common.go.
+
+Copyright (c) 2014, WAKAYAMA Shirou
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+ * Neither the name of the gopsutil authors nor the names of its contributors
+ may be used to endorse or promote products derived from this software without
+ specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
\ No newline at end of file
diff --git a/README.md b/README.md
index cd97728f12d2..7b2ec3490aa2 100644
--- a/README.md
+++ b/README.md
@@ -34,11 +34,8 @@
•
Monitoring
•
- Performance
- •
Security
•
- Roadmap