The queue loads its messages from the `git log` output. I suppose SimpleGit loads all commits into memory when you use `logCommits.all`, which could be a problem. I do not expect it to be too slow or to use too much memory, even for very big repositories with a lot of commits, but at least we should know the limit. We could improve the process (if needed) by:

- Not loading all commits into memory, but processing and filtering them on the fly instead.
- Maybe, if we modify `nextJob()` or even add a new method to get a given job, `getJob(jobId: JobId)`, we might need to re-process the `git log` to find the information for a missing job when it is not in memory. In that case, the `CommitMessageLog` attribute of the queue would be only a cache of the full queue information (see the sketch after this list).
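A minimal sketch of that second idea, assuming a hypothetical `GitQueue` class, a `parseJob` helper, and a `job: <id>` convention in commit subjects (none of these exist in the codebase; the names and the tab-separated `git log` format are illustrative). On a `getJob` miss it streams `git log` and stops at the first match, instead of materialising every commit through `logCommits.all`:

```typescript
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";

// Hypothetical types; the real queue stores richer job data parsed from commit messages.
type JobId = string;
interface Job {
  id: JobId;
  hash: string;
  message: string;
}

// Illustrative parser: assumes the job id is encoded in the commit subject as "job: <id>".
function parseJob(hash: string, message: string): Job | undefined {
  const match = message.match(/job:\s*(\S+)/);
  return match ? { id: match[1], hash, message } : undefined;
}

class GitQueue {
  // The commit-message log acts as a cache of jobs already seen, not the source of truth.
  private cache = new Map<JobId, Job>();

  constructor(private readonly repoDir: string) {}

  async getJob(jobId: JobId): Promise<Job | undefined> {
    // Fast path: the job was already parsed into the in-memory cache.
    const cached = this.cache.get(jobId);
    if (cached) return cached;

    // Slow path: stream `git log` line by line and stop as soon as the job is found,
    // instead of loading the whole history into memory.
    const git = spawn("git", ["log", "--format=%H%x09%s"], { cwd: this.repoDir });
    const lines = createInterface({ input: git.stdout });

    for await (const line of lines) {
      const [hash, subject] = line.split("\t");
      const job = parseJob(hash, subject ?? "");
      if (job) this.cache.set(job.id, job);
      if (job?.id === jobId) {
        git.kill(); // no need to read the rest of the history
        return job;
      }
    }
    return undefined;
  }
}
```

Cutting the stream short keeps memory use flat, but a cache miss for a very old job still walks most of the history, which is why treating `CommitMessageLog` as only a cache keeps the common path cheap.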
I'm going to add this issue to the roadmap for a future release. This issue is not intended to provide the solution to the potential bottleneck; if we decide to do something, we can discuss the solution and create a new issue for the implementation.
I think this method could be the basis for the benchmarking workflow mentioned here. We could either use a fixed test project or create one on the fly; using an external fixture project would make the test faster.
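A rough sketch of what such a benchmark could look like, assuming simple-git as the log reader; the fixture-building helper, the commit count, and the commit-message format are illustrative assumptions, not part of the project:

```typescript
import { mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { execFileSync } from "node:child_process";
import { simpleGit } from "simple-git";

// Build a throw-away repository with `count` commits (illustrative fixture).
function createFixtureRepo(count: number): string {
  const dir = mkdtempSync(join(tmpdir(), "queue-bench-"));
  const git = (...args: string[]) => execFileSync("git", args, { cwd: dir });
  git("init");
  git("config", "user.email", "bench@example.com");
  git("config", "user.name", "bench");
  for (let i = 0; i < count; i++) {
    writeFileSync(join(dir, "file.txt"), `revision ${i}\n`);
    git("add", "file.txt");
    git("commit", "-m", `job: ${i}`);
  }
  return dir;
}

async function main() {
  const dir = createFixtureRepo(1000); // tune the commit count to probe the limit
  const started = Date.now();
  const log = await simpleGit(dir).log(); // loads the whole history, like logCommits.all
  console.log(`read ${log.total} commits in ${Date.now() - started} ms`);
}

main().catch(console.error);
```

Generating the fixture dominates the run time for large commit counts, which matches the point above: a pre-built external fixture project would make the test itself faster.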
@da2ce7 @yeraydavidrodriguez