Enable TypeSubsumptionCache for IDE use #18499
base: main
Conversation
❗ Release notes required
Caution: No release notes found for the changed paths (see table below). Please make sure to add an entry with an informative description of the change, as well as a link to this pull request, the issue, and a language suggestion if applicable. Release notes for this repository are based on the Keep A Changelog format. The following format is recommended for this repository:
If you believe that release notes are not necessary for this PR, please add the NO_RELEASE_NOTES label to the pull request. You can open this PR in the browser to add release notes: open in github.dev
@@ -106,15 +133,13 @@ type [<Struct; NoComparison; CustomEquality>] TTypeCacheKey =
type ImportMap(g: TcGlobals, assemblyLoader: AssemblyLoader) =
    let typeRefToTyconRefCache = ConcurrentDictionary<ILTypeRef, TyconRef>()

    let typeSubsumptionCache = ConcurrentDictionary<TTypeCacheKey, bool>(System.Environment.ProcessorCount, 1024)
In incremental mode, ImportMap gets created quite a lot, so this had to be attached somewhere else.
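As a rough sketch of what "attaching it somewhere else" can look like, assuming a ConditionalWeakTable keyed by TcGlobals (one option that also shows up in a later diff hunk in this thread); the helper name getOrCreateTypeSubsumptionCache is hypothetical:

```fsharp
// Sketch only: one subsumption cache per TcGlobals instance, so the cache
// survives the frequent recreation of ImportMap in incremental mode.
// TcGlobals and TTypeCacheKey are the compiler's own types.
open System.Collections.Concurrent
open System.Runtime.CompilerServices

let private typeSubsumptionCaches =
    ConditionalWeakTable<TcGlobals, ConcurrentDictionary<TTypeCacheKey, bool>>()

let getOrCreateTypeSubsumptionCache (g: TcGlobals) =
    typeSubsumptionCaches.GetValue(g, fun _ ->
        ConcurrentDictionary<TTypeCacheKey, bool>(System.Environment.ProcessorCount, 1024))
```

With a ConditionalWeakTable the cache's lifetime is tied to the TcGlobals instance, so nothing needs explicit disposal when a build graph goes away.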
open System.Diagnostics.Metrics

[<Struct; RequireQualifiedAccess; NoComparison>]
type EvictionMethod =
I removed the Blocking eviction method for now. It's another thing that would rot, because it's unused.
Agree
member _.Remove(_) = ()
}

type ICacheEvents =
Are events needed, or should we just use Metrics counters directly?
It was added for observability. If we can use metrics for it instead, that serves better. (Also fewer worries about loose GC handles, IMO.)
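For comparison, a minimal sketch of reporting hits and misses through System.Diagnostics.Metrics counters instead of events; the meter and counter names here are made up for illustration and would need to line up with whatever the cache actually registers:

```fsharp
// Sketch only: Metrics counters instead of ICacheEvents. Instruments are
// plain objects, so there are no event subscriptions (and no GC handles)
// to keep track of.
open System.Diagnostics.Metrics

let private meter = new Meter("FSharp.Compiler.CacheInstrumentation")
let private hits = meter.CreateCounter<int64>("cache-hits")
let private misses = meter.CreateCounter<int64>("cache-misses")

/// Record one cache lookup (hypothetical helper).
let recordLookup (found: bool) =
    if found then hits.Add 1L else misses.Add 1L
```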
use meterProvider =
    OpenTelemetry.Sdk.CreateMeterProviderBuilder()
        .AddMeter(nameof FSharp.Compiler.CacheInstrumentation)
        .AddMeter("System.Runtime")
This "System.Runtime" meter works only for net9.0 but is actually quite awesome. GC, memory, and lots of other metrics can be viewed in Prometheus in real time while the tests run. I use a simple docker setup like this
// FSI-LINKAGE-POINT: unsited init
do FSharp.Interactive.Hooks.fsiConsoleWindowPackageCtorUnsited (this :> Package)

// Uncomment to view cache metrics in the output window
// do Logging.FSharpServiceTelemetry.logCacheMetricsToOutput ()
Cache metrics are viewable directly in the VS output window.
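Sketched independently of the PR's Logging.FSharpServiceTelemetry (so the names and wiring here are assumptions), this is roughly what forwarding cache measurements to an output-window logger could look like with a MeterListener:

```fsharp
// Sketch only, not the PR's logCacheMetricsToOutput: subscribe to the cache
// meter and forward each measurement to a caller-supplied logging function.
open System
open System.Diagnostics.Metrics

let listenToCacheMetrics (log: string -> unit) : MeterListener =
    let listener = new MeterListener()
    listener.InstrumentPublished <-
        Action<Instrument, MeterListener>(fun instrument l ->
            // The meter name is assumed; it should match the cache's Meter.
            if instrument.Meter.Name = "FSharp.Compiler.CacheInstrumentation" then
                l.EnableMeasurementEvents instrument)
    listener.SetMeasurementEventCallback<int64>(
        MeasurementCallback<int64>(fun instrument value _tags _state ->
            log $"{instrument.Name}: {value}"))
    listener.Start()
    // The caller keeps and disposes the listener when logging should stop.
    listener
```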
let private overrideVariable = "FSHARP_CACHE_OVERRIDE"

/// Use for testing purposes to reduce memory consumption in testhost and its subprocesses.
let OverrideMaxCapacityForTesting () =
Unfortunately, the settings that work best in real-world use are not great for CI. I'm still not sure what's going on, so this is an attempt.
src/Compiler/Utilities/Caches.fs (Outdated)
let options =
    match Environment.GetEnvironmentVariable(overrideVariable) with
    | null -> options
    | _ -> { options with MaximumCapacity = 1024 }
So we have:
- CacheOptions.Default.MaximumCapacity, which is never used...?
- createTypeSubsumptionCache, which depends on the options.
- And this piece, which overrides it regardless of the options (I guess it assumes that a CI run is always one-off and isolated, and intentionally works with a very small cache)?
Yes, the idea is to override only the MaximumCapacity when running in the testhost process. We call OverrideMaxCapacityForTesting () at the start of the xUnit test run. The env var should propagate to app domains on net472, and even to the subprocesses we start a lot of.
OK, that makes sense.
I agree it's better to have a small cache than no cache at all in CI, so that we keep a higher level of dogfooding.
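A compact sketch of that flow, under the assumption stated above (and not shown in the diff excerpt) that OverrideMaxCapacityForTesting works by setting FSHARP_CACHE_OVERRIDE in the process environment; the CacheOptions record below is a stub reduced to the one field that matters here:

```fsharp
// Sketch only: the env var set at the start of the xUnit run is inherited by
// net472 app domains and by subprocesses spawned from testhost, so every
// process that later creates a cache picks up the reduced capacity.
open System

// Stub of the repo's CacheOptions; the real record has more fields.
type CacheOptions = { MaximumCapacity: int }

let private overrideVariable = "FSHARP_CACHE_OVERRIDE"

// Called once at test-run start (hypothetical body).
let overrideMaxCapacityForTesting () =
    Environment.SetEnvironmentVariable(overrideVariable, "true")

// Mirrors the override in the diff above: any non-null value clamps capacity.
let applyTestOverride (options: CacheOptions) =
    match Environment.GetEnvironmentVariable overrideVariable with
    | null -> options
    | _ -> { options with MaximumCapacity = 1024 }
```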
deal with memory restrictions in CI, dispose caches
@@ -108,7 +108,9 @@ let createTypeSubsumptionCache (g: TcGlobals) =
        MaximumCapacity = 4 * 32768 }
    Cache.Create<TTypeCacheKey, bool>(options)

let typeSubsumptionCaches = ConditionalWeakTable<TcGlobals, Cache<TTypeCacheKey, bool>>()
let typeSubsumptionCaches = Cache.Create<TcGlobals, Cache<TTypeCacheKey, bool>>({ CacheOptions.Default with MaximumCapacity = 16 })
Each cache held here roughly corresponds to one assembly being built / one incremental builder. My understanding is that there is not much turnover here, unless something changes at the project level.
In a test run the story is different: hundreds can be created and disposed.
That kind of asks for LFU.
Wouldn't LRU also fit here?
If a project does not need its cache for a longer time (i.e. it likely has not been touched for many minutes), just get rid of it?
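For illustration of that alternative only (this is not the Cache type in the PR): a minimal LRU-style store where the least recently touched entry is evicted once the collection exceeds its capacity, e.g. one sub-cache per project.

```fsharp
// Sketch only: least-recently-used eviction for a small keyed collection.
// Not thread-safe; purely to illustrate the LRU policy discussed above.
open System.Collections.Generic

type LruStore<'K, 'V when 'K: equality>(capacity: int) =
    let entries = Dictionary<'K, 'V>()
    let lastUsed = Dictionary<'K, int64>()
    let mutable clock = 0L

    member _.GetOrAdd(key: 'K, create: 'K -> 'V) =
        // Touch the key so it counts as recently used.
        clock <- clock + 1L
        lastUsed[key] <- clock
        match entries.TryGetValue key with
        | true, v -> v
        | _ ->
            // Evict the least recently touched entry once over capacity.
            if entries.Count >= capacity then
                let victim = entries.Keys |> Seq.minBy (fun k -> lastUsed[k])
                entries.Remove victim |> ignore
                lastUsed.Remove victim |> ignore
            let v = create key
            entries[key] <- v
            v
```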
The performance of this is clearly off. I think I fixed the excessive memory use in CI (notwithstanding the Linux-leg memory warnings, but those were always there), but it still runs much slower than main.
Description
This builds upon Vlad's work from #18190 and #17668, extending the fix for #17501 to incremental compilation and typechecking in the IDE editor.
This is a resubmitted and cleaned-up version of #18468.
TODO:
tests, comments, write-up, discussion
Checklist
Test cases added
Performance benchmarks added in case of performance changes
Release notes entry updated: