At the moment, when a tenant is restarted while the tenant's container / IServiceProvider is being used to satisfy multiple requests (i.e. a scope is created per request), the act of restarting the tenant - which disposes of the tenant's IServiceProvider - may cause IDisposable objects still in use by those active requests to be disposed, producing errors. This could happen if:
A service is registered in the tenant's container as a singleton and implements IDisposable.
Scoped services create new instances per request anyway, so they should be fine.
Transient services also get new instances per resolution, so they should be fine too.
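To make the failure mode concrete, here is a minimal, hypothetical repro (plain C#, no DI library) of what an in-flight request sees when the singleton it resolved earlier is disposed out from under it by a tenant restart:

```csharp
using System;

// Stand-in for a disposable singleton registered in the tenant container.
class ExpensiveSingleton : IDisposable
{
    public bool Disposed { get; private set; }

    public string DoWork()
    {
        if (Disposed) throw new ObjectDisposedException(nameof(ExpensiveSingleton));
        return "ok";
    }

    public void Dispose() => Disposed = true;
}

class Program
{
    static void Main()
    {
        var singleton = new ExpensiveSingleton();   // resolved from the tenant container
        singleton.Dispose();                        // tenant restart disposes the container (and its singletons)
        try
        {
            singleton.DoWork();                     // in-flight request still holds the stale instance
        }
        catch (ObjectDisposedException)
        {
            Console.WriteLine("request failed: ObjectDisposedException");
        }
    }
}
```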
Another issue concerns singleton services: when the tenant container is disposed, a new tenant container is rebuilt. Some requests may still be using the older singleton instance from the old container, while new incoming requests served by the new container get a different singleton instance. In a high-concurrency situation this means two different singleton instances could temporarily be active at the same time during a tenant restart.
To solve these issues I am thinking of something like the following:
When a request comes in, and the middleware calls "CreateScope()" on the tenant's container in order to get a container lifetime scope to serve the request:
It will increment a counter on the container. When the scope is disposed at the end of the request, the scope will decrement the counter. This counter represents the number of requests (scopes) currently active from the tenant's container.
It will need to acquire a reader lock in order to call CreateScope() in the first place and obtain the scope used to serve the request. The reader lock will be blocked while the tenant's container is being disposed of and rebuilt via a tenant restart. That may take some time to complete, so reader lock acquisition will wait for a timeout - perhaps up to 60 seconds. This would cause some requests to the tenant to be delayed during a tenant restart, but not dropped.
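A minimal sketch of this request path, using ReaderWriterLockSlim and an Interlocked counter (the type and member names here are illustrative, not the library's actual API):

```csharp
using System;
using System.Threading;

// Sketch of the request side: the reader lock gates CreateScope(), and the
// counter tracks scopes that are still serving requests.
class TenantContainer
{
    readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    int _activeScopes;

    public int ActiveScopes => Volatile.Read(ref _activeScopes);

    public IDisposable CreateScope()
    {
        // Block (up to 60s) while a tenant restart holds the write lock.
        if (!_lock.TryEnterReadLock(TimeSpan.FromSeconds(60)))
            throw new TimeoutException("Tenant restart did not complete in time.");
        try
        {
            Interlocked.Increment(ref _activeScopes);
            return new Scope(this);
        }
        finally
        {
            _lock.ExitReadLock();   // held only long enough to create the scope
        }
    }

    internal void OnScopeDisposed() => Interlocked.Decrement(ref _activeScopes);

    sealed class Scope : IDisposable
    {
        readonly TenantContainer _owner;
        bool _disposed;
        public Scope(TenantContainer owner) => _owner = owner;
        public void Dispose()
        {
            if (_disposed) return;
            _disposed = true;
            _owner.OnScopeDisposed();   // end of request: decrement the counter
        }
    }
}

class Program
{
    static void Main()
    {
        var container = new TenantContainer();
        var scope = container.CreateScope();
        Console.WriteLine(container.ActiveScopes);   // 1
        scope.Dispose();
        Console.WriteLine(container.ActiveScopes);   // 0
    }
}
```

Note the design choice: the read lock is released as soon as the scope is created, so long-running requests don't hold the lock - the counter, not the lock, is what the restart path waits on.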
When a "Tenant Restart" request comes in to trigger a restart of the current tenant:
It will locate the current tenant's container and attempt to obtain a writer lock on it. Obtaining the writer lock causes all reader lock acquisition to block.
It will monitor the container's counter indicating the number of currently active scopes. As all new requests are now blocked, this number should decrease as active requests finish processing and their scopes are disposed. Once it hits 0, we can safely dispose of the container and replace it with a newly built one, at which point the writer lock is released and reader lock acquisition unblocks. When those waiting requests then call CreateScope(), they should get a scope from the newly built container rather than the old one.
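The restart side of that scheme could look something like this sketch (again, illustrative names; the drain loop and rebuild callback are assumptions about how the real implementation would be wired up):

```csharp
using System;
using System.Threading;

// Sketch of the restart side: the write lock blocks new CreateScope() calls,
// then the restart waits for the active-scope counter to drain to zero before
// disposing the old container and swapping in a freshly built one.
class TenantContainerHost
{
    readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    readonly Func<IDisposable> _build;
    int _activeScopes;
    IDisposable _container;   // stands in for the tenant's IServiceProvider

    public TenantContainerHost(Func<IDisposable> build)
    {
        _build = build;
        _container = build();
    }

    // Called by the request path shown earlier in the design (not repeated here).
    public void OnScopeCreated() => Interlocked.Increment(ref _activeScopes);
    public void OnScopeDisposed() => Interlocked.Decrement(ref _activeScopes);

    public void Restart()
    {
        _lock.EnterWriteLock();   // reader lock acquisition (CreateScope) now blocks
        try
        {
            // Drain: wait for in-flight requests to dispose their scopes.
            while (Volatile.Read(ref _activeScopes) > 0)
                Thread.Sleep(10);

            _container.Dispose();    // safe: no active scope can be using it
            _container = _build();   // rebuild before letting readers back in
        }
        finally
        {
            _lock.ExitWriteLock();   // blocked requests resume on the new container
        }
    }
}

sealed class DummyContainer : IDisposable
{
    public void Dispose() { }
}

class Program
{
    static void Main()
    {
        int builds = 0;
        var host = new TenantContainerHost(() => { builds++; return new DummyContainer(); });
        host.Restart();
        Console.WriteLine(builds);   // 2: initial build + rebuild after restart
    }
}
```

A polling drain loop is the simplest sketch; a production version might instead have the last OnScopeDisposed() signal an event, and should probably give up after a timeout rather than wait forever on a stuck request.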
Need to consider this bit carefully:
It will monitor the container's counter indicating the number of currently active scopes. As all new requests are now blocked, this number should decrease as active requests finish processing and their scopes are disposed.
If the tenant has a background service running, it may also be creating and disposing scopes. This would work in exactly the same way as the request middleware.
Background services should be resolved from the tenant container, and therefore should be signalled to Stop() when the container is disposing. I believe this is already handled, but it's worth checking.
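One hedged way to sketch that stop signal (this is a stand-in for whatever the real hosting integration does, not the actual hosting API): the tenant container exposes a CancellationToken that is cancelled when the container is disposed, and background services observe it.

```csharp
using System;
using System.Threading;

// Illustrative stub of a tenant container that signals its background
// services, via a CancellationToken, when it is disposed during a restart.
sealed class TenantContainerStub : IDisposable
{
    readonly CancellationTokenSource _stopping = new CancellationTokenSource();

    public CancellationToken Stopping => _stopping.Token;

    public void Dispose()
    {
        _stopping.Cancel();    // registered callbacks run synchronously here
        _stopping.Dispose();
    }
}

class Program
{
    static void Main()
    {
        var container = new TenantContainerStub();

        // A background service would register its shutdown logic against the token.
        container.Stopping.Register(() => Console.WriteLine("background service stopped"));

        container.Dispose();   // tenant restart disposes the container
    }
}
```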