How to: durable queue subscriber #95
I'm not aware of any specific issues; can you provide some test code that reproduces this? If you don't dispose the subscription, you should close the subscriber when you are finished with it to free resources and let the streaming server clean up.
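As a sketch of that advice (assuming the STAN.NET `STAN.Client` package and a running NATS Streaming server; the cluster ID, client ID, and subject below are placeholders, not from this thread), closing rather than unsubscribing is what preserves a durable subscription's position on the server:

```csharp
// Sketch only: assumes the STAN.Client NuGet package and a local
// NATS Streaming server; ids and subject names are placeholders.
using System;
using System.Text;
using STAN.Client;

class CloseExample
{
    static void Main()
    {
        var cf = new StanConnectionFactory();
        using (var c = cf.CreateConnection("test-cluster", "example-client"))
        {
            var opts = StanSubscriptionOptions.GetDefaultOptions();
            opts.DurableName = "my-durable";

            var sub = c.Subscribe("foo", opts, (obj, args) =>
                Console.WriteLine("Received: {0}",
                    Encoding.UTF8.GetString(args.Message.Data)));

            // Close() releases client-side resources but keeps the durable
            // subscription's position on the server, so a later subscriber
            // with the same DurableName resumes where this one left off.
            sub.Close();

            // By contrast, Unsubscribe() would permanently remove the
            // durable subscription from the server:
            // sub.Unsubscribe();
        }
    }
}
```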
Thank you @ColinSullivan1 for the answer. I use this NATS Streaming server. Here is my Publisher code (the body was elided in the original issue):

```csharp
using System;

namespace STAN.Example.Publish
{
    // ... (code elided in the original issue)
}
```

Here is my Consumer code (also elided):

```csharp
using System;

namespace STAN.Example.Subscribe
{
    // ... (code elided in the original issue)
}
```

Here are the steps to reproduce:
I noticed that if I don't call the consumer's Dispose, it works. I hope I have been clear this time.
I think there might be some logic around the StanSubscriptionOptions.LeaveOpen property. I was trying to simulate a 'durable subscription' with a console app that made a new connection inside a using block and looped, so it would "disconnect" every few moments. What I didn't realize is that my using() closure was calling Dispose, and because I hadn't set LeaveOpen on that subscription, I was not properly simulating a disconnect: the server received an unsubscribe. You might want to verify your LeaveOpen option.
I'm trying to use a durable queue subscriber. I just downloaded the example and modified it.
In the StanSubscriber, I set the "DurableName" in the "StanSubscriptionOptions" and passed the "qGroup" parameter in the "Subscribe" method.
It doesn't work.
After many attempts, I removed the using block that disposes the "IStanSubscription", and the durable feature worked.
I'd like to know whether this is the right approach.
And if it is: if I never dispose an "IStanSubscription", will I run into problems?
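For reference, the setup described above can be sketched like this (assuming the STAN.NET client; the cluster ID, client ID, subject, and queue group name are placeholders, not taken from the issue):

```csharp
// Sketch only: assumes the STAN.Client NuGet package and a running
// NATS Streaming server; ids, subject, and group names are placeholders.
using System;
using System.Text;
using STAN.Client;

class DurableQueueSubscriber
{
    static void Main()
    {
        var cf = new StanConnectionFactory();
        using (var c = cf.CreateConnection("test-cluster", "sub-client"))
        {
            var opts = StanSubscriptionOptions.GetDefaultOptions();
            opts.DurableName = "my-durable"; // position survives disconnects

            // Passing a queue group name ("workers") makes this a durable
            // queue subscriber: group members share the messages on "foo".
            var sub = c.Subscribe("foo", "workers", opts, (obj, args) =>
            {
                Console.WriteLine(Encoding.UTF8.GetString(args.Message.Data));
            });

            Console.ReadLine(); // process messages until Enter is pressed

            // Close (do not Unsubscribe) so the durable queue group persists
            // on the server for the next member that joins.
            sub.Close();
        }
    }
}
```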