
Allow multiple branch/PR limit groups #10546

Open
mitchjust-finocomp opened this issue Jun 23, 2021 · 68 comments · May be fixed by #32556
Labels
priority-2-high Bugs impacting wide number of users or very important features type:feature Feature (new functionality)

Comments

@mitchjust-finocomp

What would you like Renovate to be able to do?

It would be great to be able to set a prConcurrentLimit per manager. It looks like there was a bit of discussion in renovatebot/config-help: renovatebot/config-help#563, which was closed without a solution.

E.g. ideally we could set prConcurrentLimit like the following to allow up to 5 PRs each for both the gradle and npm managers.

  packageRules: [
        {
            matchManagers: ["gradle"],
            addLabels: ["java"],
            prConcurrentLimit: 5
        },
        {
            matchManagers: ["npm"],
            addLabels: ["javascript"],
            prConcurrentLimit: 5
        },
    ],
@mitchjust-finocomp mitchjust-finocomp added priority-5-triage status:requirements Full requirements are not yet known, so implementation should not be started type:feature Feature (new functionality) labels Jun 23, 2021
@mitchjust-finocomp
Author

Failing this, is there some way to run Renovate once per manager (i.e. a separate gradle run and npm run), without it auto-closing the existing PRs which don't match the enabled managers?

@HonkingGoose
Collaborator

According to our prConcurrentLimit docs you can only limit the number of PRs open concurrently per repository:

This setting - if enabled - limits Renovate to a maximum of x concurrent PRs open at any time.

Note that this limit is enforced on a per-repository basis.

Can you explain why you need this new feature? Why do you want to limit PRs per package manager, instead of having one global limit? In what way is the global limit not working for you?


Maybe what you really want/need is a way to reduce the noisiness of Renovate updates, either by scheduling updates, grouping updates, or auto merging updates, or a combination of those strategies...

Take a look at these guides:

Do none of those strategies help with your problem?

@HonkingGoose
Collaborator

I found some issues that seem similar/related to your issue:

@rarkins
Collaborator

rarkins commented Jun 23, 2021

In your example you're expecting 5 for gradle plus 5 for npm, equaling 10 total?

@HonkingGoose
Collaborator

In your example you're expecting 5 for gradle plus 5 for npm, equaling 10 total?

That's the way I read it as well. 😄

  • 5 concurrent PRs for Gradle
  • 5 concurrent PRs for npm

10 concurrent PRs total.

But what do we do if there are three package managers/file types in the repo, and concurrent PR limits were only explicitly specified for npm and gradle, but not for, say, python? Use the default concurrent PR limit from config:base?

@mitchjust-finocomp
Author

Can you explain why you need this new feature? Why do you want to limit PRs per package manager, instead of having one global limit? In what way is the global limit not working for you?

Maybe what you really want/need is a way to reduce the noisiness of Renovate updates, either by scheduling updates, grouping updates, or auto merging updates, or a combination of those strategies...

To clarify, I would expect 10 total PRs - 5 for gradle, 5 for npm.

In a project with a large number of dependencies where different parts of the codebase are maintained by different teams/collaborators (eg. a UI team looking after npm dependencies, a backend team looking after gradle dependencies), the renovate PRs can get swamped by one manager.

I agree that more proactively merging in dependencies would resolve this, but in reality they (especially UI dependencies) require thorough testing before merging. So, for example, if there is a backlog of npm dependencies, we won't see any gradle PRs until all the existing ones have been actioned.

Note that as a workaround I've found that by setting pruneStaleBranches: false I can run renovate multiple times, once with --enabled-managers=npm and once with --enabled-managers=gradle, to get the full number of PRs for each manager.

@viceice
Member

viceice commented Jun 28, 2021

I think you should use dependency dashboard approval for your UI packages, so you can approve the number you like to handle.

@HonkingGoose
Collaborator

I agree that more proactively merging in dependencies would resolve this, but in reality they (especially UI dependencies) require thorough testing before merging. So, for example, if there is a backlog of npm dependencies, we won't see any gradle PRs until all the existing ones have been actioned.

I think you should use dependency dashboard approval for your UI packages, so you can approve the number you like to handle.

I like the Dependency Dashboard approval, but this doesn't let you know if you have any updates which pass tests and are easy merges.

Maybe you just want Renovate to create PRs for all updates, not just a limited amount of them.

If you extend from the config:base preset, Renovate will open 20 PRs at a time:

{
  "extends": [
    ":prHourlyLimit2",
    ":prConcurrentLimit20"
  ]
}

https://docs.renovatebot.com/presets-config/#configbase

But you can tell Renovate to create all PRs by setting prConcurrentLimit to 0, which in essence means that there's no limit on the number of open PRs.

https://docs.renovatebot.com/configuration-options/#prconcurrentlimit

Would this help your problem?

@mitchjust-finocomp
Author

But you can tell Renovate to create all PRs by setting prConcurrentLimit to 0, which in essence means that there's no limit on the number of open PRs.

https://docs.renovatebot.com/configuration-options/#prconcurrentlimit

Would this help your problem?

The only real problem I've hit with a high limit is the number of rebases (and subsequent builds) after each merge + renovate run

@viceice
Member

viceice commented Jun 28, 2021

This only happens if you use automerge or rebaseWhen=behind-base-branch. If you leave it on auto, Renovate will only rebase automerge branches and conflicted branches. All other branches are left alone.

@HonkingGoose
Collaborator

I recommended setting prConcurrentLimit to 0, which essentially means unlimited concurrent PRs, in an earlier comment:

But you can tell Renovate to create all PRs by setting prConcurrentLimit to 0, which in essence means that there's no limit on the number of open PRs.
https://docs.renovatebot.com/configuration-options/#prconcurrentlimit
Would this help your problem?

The only real problem I've hit with a high limit is the number of rebases (and subsequent builds) after each merge + renovate run

This is a bad idea... 🙈
I recently learned that setting prConcurrentLimit to 0 is really, really bad for Renovate's performance. Quoting our documentation:

Warning: Leaving PRs/branches as unlimited or as a high number increases the time it takes for Renovate to process a repository.
If you find that Renovate is too slow when rebasing out-of-date branches, decrease the branchConcurrentLimit.

@gustaff-weldon

gustaff-weldon commented Oct 4, 2021

Following discussion #11969 and the suggestion by @HonkingGoose and @rarkins, I will comment here.

The feature requested here will not solve our problem, but it could possibly be generalised.

We would like the ability to specify limits per folder/package file. Specifically, we are looking into replicating Dependabot's feature:

updates:
  - package-ecosystem: npm
    directory: /
    open-pull-requests-limit: 20

  - package-ecosystem: bundler
    directory: /api-docs
    open-pull-requests-limit: 3

  - package-ecosystem: npm
    directory: /apollo-service-push
    open-pull-requests-limit: 3

Answering questions from @rarkins:

Which resources are you trying to protect?

We want to set the PR limit per subproject in a monorepo. For us, a subproject is a folder which contains e.g. package.json, Gemfile, or gradle.kts, and we want to make sure all projects get dependency checks and PRs (up to the set limits).

How do existing limits help that

We are using prConcurrentLimit set to 3 × the number of FE projects, but it does not work as we wish: one of the projects gets pretty much all the PRs and the others none.

What unfortunate side effects?

prConcurrentLimit is set per repo, so it does not ensure every project gets equal treatment; e.g. one very noisy project can fill up the PR limit before others are considered (yes, we do have grouping set up).

How could we satisfy both?

In a more generic way, perhaps a more abstract solution where a prLimit is set per group defined by a grouping aspect, e.g. matchManager, matchPath, matchXYZ, similar to how packageRules work.

@rarkins
Collaborator

rarkins commented Oct 5, 2021

@gustaff-weldon in other words, you want these prLimits to operate independently of each other?

One thing different about Renovate is that it allows very flexible grouping, meaning that we can't stop you defining some limits per path but then mixing them into shared PRs. Renovate also doesn't make you specify which folders to update or what type they are of course.

We may need a new limit concept like prLimitBucket which defaults to "default". Then you could e.g.

"packageRules": [{
  "matchPaths": [...],
  "prLimitBucket": "fe",
  "prConcurrentLimit": 3
}]

This could also potentially support templating so that you could assign every separate package its own limit, e.g.

"packageRules": [{
  "matchPackagePatterns": ["*"],
  "prLimitBucket": "{{parentDir}}",
  "prConcurrentLimit": 3
}]

@viceice @JamieMagee wdyt about this concept?
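The bucket concept above could be sketched roughly like this. This is illustrative Python with invented data shapes and matching logic, not Renovate's actual implementation: count open PRs per bucket, and only allow a new PR while its bucket is under its limit.

```python
# Hypothetical sketch of the proposed prLimitBucket semantics; the dict
# shapes and the matching logic are invented for illustration.
from collections import Counter

def bucket_for(item, rules):
    """Return the limit bucket an update or open PR falls into."""
    for rule in rules:
        if item["parentDir"].startswith(tuple(rule["matchPaths"])):
            # a template like "{{parentDir}}" would expand per item
            return rule["prLimitBucket"].replace("{{parentDir}}", item["parentDir"])
    return "default"

def may_open_pr(update, open_prs, rules, limits):
    """True while the update's bucket is under its concurrent limit (0 = unlimited)."""
    counts = Counter(bucket_for(pr, rules) for pr in open_prs)
    bucket = bucket_for(update, rules)
    limit = limits.get(bucket, limits["default"])
    return limit == 0 or counts[bucket] < limit
```

With a bucket "fe" limited to 1, a second FE update would be held back even while the "default" bucket still has headroom.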

@viceice
Member

viceice commented Oct 5, 2021

Sounds good to me.

@rarkins
Collaborator

rarkins commented Oct 5, 2021

Apart from this concept, we'd also need to change our logic so that we calculate limits per branch, and not at the start of the "for branch of branches" loop like we do today.

@gustaff-weldon

@gustaff-weldon in other words, you want these prLimits to operate independently of each other?

Yes. I need granularity, ideally at the packageRules level, but I'm happy to define PR limit rules separately.

One thing different about Renovate is that it allows very flexible grouping, meaning that we can't stop you defining some limits per path but then mixing them into shared PRs.

I'm aware of that, and I'm also aware you could have groups that overlap, but I would not worry about that tbh; if you check the number of PRs per group, that would be enough.

We may need a new limit concept like prLimitBucket which defaults to "default". Then you could e.g.

"packageRules": [{
  "matchPaths": [...],
  "prLimitBucket": "fe",
  "prConcurrentLimit": 3
}]

I would be fine even with packageRules.{}.prConcurrentLimit alone. What would you use prLimitBucket for?

Anyhow, any direction you'd take to allow granularity on PRs will be a win for me :)

Thanks for giving this a thought @rarkins!

@HonkingGoose HonkingGoose added priority-3-medium Default priority, "should be done" but isn't prioritised ahead of others and removed priority-5-triage labels Oct 18, 2021
@rarkins rarkins changed the title PR Limit Per Manager Allow multiple branch/PR limit groups May 28, 2022
@rarkins
Collaborator

rarkins commented May 28, 2022

Revisiting this idea.

Limits can be configured at top level, and they apply to all branches, like we originally intended.

Limits can be defined in named groups and then applied, like so:

{
  "limitGroups": [{
    "limitName": "development dependencies",
    "prConcurrentLimit": 1
  }],
  "packageRules": [{
    "matchManagers": ["npm"],
    "matchDepTypes": ["devDependencies"],
    "applyLimitGroups": ["development dependencies"]
  }, {
    "matchManagers": ["bundler"],
    "matchDepTypes": ["test"],
    "applyLimitGroups": ["development dependencies"]
  }]
}

Limit groups can be implicitly defined in packageRules:

{
  "packageRules": [{
    "matchDepTypes": ["devDependencies", "dev"],
    "prConcurrentLimit": 1
  }]
}
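Under assumed config shapes, the named-group variant above might resolve an update's effective concurrent limit like this (a sketch, not Renovate code; one possible answer to the open questions below is "apply all limits, i.e. the lowest"):

```python
# Illustrative sketch only: invented data shapes, not Renovate's internals.
# Resolves one update's effective concurrent limit from named limitGroups.
def effective_limit(update, config):
    groups = {g["limitName"]: g["prConcurrentLimit"]
              for g in config.get("limitGroups", [])}
    applied = []
    for rule in config.get("packageRules", []):
        manager_ok = update["manager"] in rule.get("matchManagers", [update["manager"]])
        dep_type_ok = update["depType"] in rule.get("matchDepTypes", [update["depType"]])
        if manager_ok and dep_type_ok:
            applied += [groups[name] for name in rule.get("applyLimitGroups", [])]
    # when several limits apply, take the lowest; fall back to the global limit
    return min(applied) if applied else config.get("prConcurrentLimit", 0)
```

With the example config above, both npm devDependencies and bundler test dependencies would share the "development dependencies" limit of 1, while everything else uses the global prConcurrentLimit.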

Questions remaining:

  • Should we support multiple limits, in which case we apply all of them?
  • What if there's different limits applied within a single branch - apply all of them?
  • Should we let someone ever "reset" limits or should they always be mergeable?


@rarkins
Collaborator

rarkins commented Aug 13, 2022

We already have a prPriority field too


@rarkins
Collaborator

rarkins commented Mar 6, 2023

  • What if there's different limits applied within a single branch - apply all of them?
  • Should we let someone ever "reset" limits or should they always be mergeable?

@Jared-Bottelsen

+1
Having the ability to have separate concurrency limits based on package manager would be incredibly helpful for repos with both server and client directories. We have both NuGet and npm dependencies configured but since configuring for NuGet, the npm package update PRs have halted due to NuGet updates hogging the concurrency limit. If we could apply a separate limit per manager, this would allow for both server and client package updates to occur simultaneously under their own limits.

@philmtd

philmtd commented Aug 8, 2023

We're also interested in this feature. We want to update our internal dependencies no matter what the limits for other dependencies are and no matter how many PRs are open.

I guess @rarkins is waiting for input regarding these questions. I'll give you my thoughts as a start:

What if there's different limits applied within a single branch - apply all of them?

I'd say apply all of them with the most specific one taking precedence over the others. Given a configuration like this:

{
  "prConcurrentLimit": 5,
  "limitGroups": [{
    "limitName": "internalDependencies",
    "prConcurrentLimit": 0
  }],
  "packageRules": [{
    "matchPackagePatterns": ["^com\\.my\\.company"],
    "applyLimitGroups": ["internalDependencies"]
  }]
}

I'd expect that we have at most 5 open PRs of dependencies not matching the package rule with a potentially unlimited number of open PRs for matching dependencies.

Whether the limitGroup should still count to the global limit of five, I'm not sure. For our use case, I would not care, but I guess things would really get messy if a config contained many limitGroups.

Should we let someone ever "reset" limits or should they always be mergeable?

You mean overriding from a parent config? I think it should be possible to override the settings for a limitGroup or to change the limitGroup of a package.

@rarkins
Collaborator

rarkins commented Aug 8, 2023

I'd say apply all of them with the most specific one taking precedence over the others

That to me sounds contradictory. Do you mean apply only the most specific instead of apply all of them?

@philmtd

philmtd commented Aug 8, 2023

That to me sounds contradictory. Do you mean apply only the most specific instead of apply all of them?

Sorry, I think there's a confusion here. I think "apply the most specific one" for each package is what I want. So in the above example a com.my.company-package would only be restricted by the internalDependencies limitGroup, while a com.other.company-package would be restricted by the global limit.

I'm not sure about how the limitGroups should interact with each other or with the global limits. Should a global prConcurrentLimit of 5 stop updates if there are 10 open PRs from a limitGroup without limits? Or should the global limit be a separate implicit limitGroup instead, so we'd always get up to five package updates plus unlimited com.my.company updates?

That's what I was trying to say with this:

Whether the limitGroup should still count to the global limit of five, I'm not sure. For our use case, I would not care, but I guess things would really get messy if a config contained many limitGroups.

@rarkins
Collaborator

rarkins commented Aug 9, 2023

I'm thinking if we can start simple and solve the majority of cases:

  • limits can be configured anywhere in packageRules
  • if more than one update exists in a branch and they have different limits, the lower one is used
  • concurrent and hourly limits are "shared"

So for example if we set a root level concurrent limit of 5, but no limit for external internal packages, then non-internal packages will never be created unless there's <5 branches, including internal.
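That "lowest limit wins" merge could be as simple as the following sketch (hypothetical upgrade dicts; 0 or absent means "no limit configured"):

```python
# Sketch of the "lowest limit wins" rule described above.
def branch_limit(upgrades):
    """A branch may bundle several upgrades; use the lowest configured
    non-zero limit. No configured limit on any upgrade means unlimited (0)."""
    configured = [u["prConcurrentLimit"] for u in upgrades
                  if u.get("prConcurrentLimit")]
    return min(configured) if configured else 0
```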

@philmtd

philmtd commented Aug 9, 2023

Sounds like a good solution to me! 👍🏼

You mean internal packages though, I guess:

[...] but no limit for external packages [...]

@rarkins
Collaborator

rarkins commented Sep 15, 2023

Additional requirement from @allanlewis in #24438:

In my case, I have a monorepo with ~30 package files and I would like to allow one open PR per package file but a higher limit overall.

@rarkins rarkins added status:ready and removed status:requirements Full requirements are not yet known, so implementation should not be started labels Oct 1, 2023

@ying-jeanne

I would really like this feature, since it would be useful when we only want security updates for the release branch but all dependency updates for the main branch. This is currently not possible, but if we could set the rate limit per package rule, I could set the limit for the release branch to 0 and leave security updates enabled for all branches.

@rarkins
Collaborator

rarkins commented Dec 12, 2023

Security updates should already bypass all rate limiting today

@MilesHeise

Another +1 here.

We have very busy service devs at the moment who can't get to Renovate regularly to clear the NuGet PRs, so the queue keeps filling up, and to keep getting front-end npm PRs we keep having to bump up the limit. We started at 5 and are now at about 25 (which also means we now have a very large set of NuGet PRs open at once), just to keep getting npm PRs.

When I first looked through the docs, I assumed I had to be missing the place where they explained how to set different PR limits, because it seemed so obviously a thing that should exist that it didn't even occur to me that it wasn't in the docs.

@ns-ggeorgiev
Contributor

@rarkins I forked the renovate repo when my suggestion of honoring the priorities for creating new issues was rejected. I implemented my idea, and it has worked great for my company for more than a year now.

This feature might be the right thing long (or very long) term, but it does not seem to be rising up your priority list. Implementing the honoring of priorities took me 2-3 days. It might not be perfect, but it definitely seems to have better ROI.

Maybe you could reconsider, and I could contribute the change back to your repo?


@rarkins
Collaborator

rarkins commented Feb 15, 2024

We don't currently have any customers needing this so it is not prioritized internally. PRs are welcome, alternatively there is an experienced (independent) consultant in Austria who you could pay for implementation.

@HonkingGoose
Collaborator

... alternatively there is an experienced (independent) consultant in Austria who you could pay for implementation.

I think @secustor is the consultant you're referring to here?

@kamok

kamok commented Aug 26, 2024

I'm interested! @rarkins
Specifically for Remediate. Are those different teams?

@rarkins
Collaborator

rarkins commented Aug 30, 2024

@kamok please reach out to your customer success manager at Mend, or have your colleagues reach out. Usually customers have a prioritized list of feature requests and while there's no guarantees of new features, we do our best to get to the highest priority ones


@rarkins
Collaborator

rarkins commented Nov 7, 2024

Here's how I'd like it implemented. Hopefully I haven't forgotten anything important.

  • prHourlyLimit, prConcurrentLimit, plus the associated branch limits should be intelligently merged per-branch, using the lowest value. Important: if a branch contains multiple non-zero limits, we should log a warning that the branch contains mixed limits, so the lowest will be used. No limits on any upgrade in a branch means no limit on the branch
  • If any branch has non-zero hourly limits, we should continue as we do today, calculating the limit by "calendar hour", i.e. take the current time, go back to the start of the hour, and calculate how many branches/PRs were created since the start of the hour (same functionality as today). We should log a debug message saying which hour window (in UTC time) will be used for all branches, to reduce chances of confusion
  • Important: if we cross hours while executing the branches, we shouldn't reset the limit. This could cause too much confusion because our branches are ordered in terms of importance. Imagine we blocked the first 10 most important branches with hourly limits but then opened the 11th one because enough time had elapsed.
  • Before executing branches, we should calculate the hourly and concurrent existing and log these values (i.e. log how many are existing, not how many left)
  • For each branch, log whether it has limits, and whether those limits have been reached
  • Any time a branch is created, we should increment the counts we calculated at the start (hourly, concurrent) and use the incremented value to calculate. We should not fetch getPrList() each time for this purpose - we should keep the count ourselves
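The counting scheme in the last bullets could be sketched as follows (invented names, not the real Renovate workers code): compute the existing counts once up front, then keep the tally in memory rather than re-fetching the PR list after every branch.

```python
# Rough sketch (hypothetical class, not Renovate internals) of the
# "count once, then increment locally" approach described above.
class LimitTracker:
    def __init__(self, open_prs, created_this_hour):
        self.concurrent = open_prs          # PRs open right now
        self.hourly = created_this_hour     # created since the top of the UTC hour

    def can_create(self, concurrent_limit, hourly_limit):
        # a limit of 0 means unlimited
        if concurrent_limit and self.concurrent >= concurrent_limit:
            return False
        if hourly_limit and self.hourly >= hourly_limit:
            return False
        return True

    def record_created(self):
        # increment our own counters instead of calling getPrList() again
        self.concurrent += 1
        self.hourly += 1
```

Because the hour window is fixed at the start of the run, crossing an hour boundary mid-run does not reset the hourly count, matching the bullet about not reopening limits for less important branches.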

@rarkins rarkins added priority-2-high Bugs impacting wide number of users or very important features and removed priority-3-medium Default priority, "should be done" but isn't prioritised ahead of others labels Nov 7, 2024
@RahulGautamSingh
Collaborator

For each branch, log whether it has limits, and whether those limits have been reached

Default limits will always be present, i.e. there is no scenario where a value won't be logged.

Important: if a branch contains multiple non-zero limits, we should log a warning that the branch contains mixed limits

This will be the case for any branch that has more than one update, because of the default value for each limit.
Should we only log this warning if any two updates in the branch have different values for at least one of the limits?

@rarkins
Collaborator

rarkins commented Nov 13, 2024

I think you misunderstand something. The second statement is not correct. It's entirely possible for all updates in a branch to have been configured with the same limits

@RahulGautamSingh RahulGautamSingh linked a pull request Nov 15, 2024 that will close this issue