
.NET 8 performance issue with IBMMQDotnetClient (v9.3 & v9.4) #109729

Open
VeeraraghavanSrinivasan opened this issue Nov 12, 2024 · 2 comments

### Description

 

Application logic:

We have a simple application that picks up the messages from Azure Service bus and commits the messages to IBM WebSphere MQ (IBMMQDotnetClient v9.3 & v9.4).

The code has been running successfully in our production environment for more than a year without any issues. We recently upgraded the target framework to .NET 8 and saw a significant slowdown, which forced us to roll back the solution to .NET 6.

When the code below is run in parallel (for example through 100 individual threads in a Function App with an Azure Service Bus trigger, or a simple `Parallel.For` in a console app), we can reproduce the problem. We opened cases with both Microsoft and IBM, who indicated that this is a matter for the .NET product group.

 

Sample code:

```csharp
using System;
using System.Collections;
using System.Threading.Tasks;
using IBM.WMQ;

class Program
{
    static string qm = "***"; static string channel = "***";
    static string host = "***"; static int port = ***; static string sslCipherSpec = "***";
    static string queue = "***";

    static void Main(string[] args)
    {
        MQQueueManager qMgr = createQMObject();

        for (int i = 1; i <= 10; i++)
        {
            var watch = System.Diagnostics.Stopwatch.StartNew();
            // Connect to queue, establish SSL connection
            MQQueue mqQueue = qMgr.AccessQueue(queue, MQC.MQOO_OUTPUT);
            watch.Stop();
            Console.WriteLine("Time taken to access Queue:" + watch.ElapsedMilliseconds + " ms");
        }

        Parallel.For(0, 20, i =>
        {
            var watch = System.Diagnostics.Stopwatch.StartNew();
            MQQueue mqQueue = qMgr.AccessQueue(queue, MQC.MQOO_OUTPUT);
            watch.Stop();
            Console.WriteLine("Time taken to access Queue:" + watch.ElapsedMilliseconds + " ms");
        });
    }

    public static MQQueueManager createQMObject()
    {
        Hashtable connectionProp = new Hashtable()
        {
            { MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED },
            { MQC.HOST_NAME_PROPERTY, host },
            { MQC.CHANNEL_PROPERTY, channel },
            { MQC.PORT_PROPERTY, port },
            { MQC.SSL_CIPHER_SPEC_PROPERTY, sslCipherSpec }
        };
        return new MQQueueManager(qm, connectionProp);
    }
}
```

 

### Configuration

 


* Which version of .NET is the code running on?

The code performs as expected on .NET 6; the same code run on .NET 8 shows the severe slowdown described above.

* What OS version, and what distro if applicable?

Windows platform

* What is the architecture (x64, x86, ARM, ARM64)?

x64

 

### Regression?

 

* .NET 6 with IBMMQDotnetClient v9.3 works perfectly fine

* .NET 8 with IBMMQDotnetClient v9.3 or v9.4 does not deliver the expected performance; it cannot even process 20 messages in parallel quickly

 

### Data

.NET 8 vs .NET 6 performance comparison for the sample code above. Each row is one "Time taken to access Queue" measurement: first the 10 sequential iterations, then (after "switching to parallel") the 20 `Parallel.For` iterations.

| .NET 8 | .NET 6 |
| --- | --- |
| 17 ms | 43 ms |
| 6 ms | 7 ms |
| 6 ms | 3 ms |
| 3 ms | 8 ms |
| 5 ms | 3 ms |
| 5 ms | 3 ms |
| 10 ms | 3 ms |
| 3 ms | 3 ms |
| 10 ms | 3 ms |
| 4 ms | 9 ms |
| *switching to parallel* | *switching to parallel* |
| 87 ms | 110 ms |
| 134 ms | 35 ms |
| 886 ms | 202 ms |
| 1762 ms | 272 ms |
| 2823 ms | 351 ms |
| 3681 ms | 410 ms |
| 4689 ms | 422 ms |
| 5865 ms | 50 ms |
| 6763 ms | 339 ms |
| 7722 ms | 8 ms |
| 7971 ms | 411 ms |
| 3041 ms | 18 ms |
| 2060 ms | 20 ms |
| 7831 ms | 19 ms |
| 4100 ms | 64 ms |
| 6135 ms | 48 ms |
| 1103 ms | 33 ms |
| 7163 ms | 180 ms |
| 126 ms | 153 ms |
| 5172 ms | 36 ms |
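One thing worth noting about the .NET 8 parallel numbers: they climb in near-constant steps (87, 134, 886, 1762, 2823, 3681, 4689 ms, ...), which is the classic signature of callers queuing behind a single shared resource, since each caller's stopwatch also measures all the callers ahead of it. This is speculation on our side rather than anything IBM or Microsoft confirmed, but the same staircase shape can be reproduced without MQ at all by timing parallel callers that contend for one lock around a fixed-cost operation:

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class LockStaircaseDemo
{
    static readonly object Gate = new object();

    public static long[] Run()
    {
        var times = new ConcurrentBag<long>();
        // 20 parallel callers contend for one lock; each holds it ~50 ms.
        // Later callers also measure the time spent waiting for the callers
        // ahead of them, so the readings climb in ~50 ms steps.
        Parallel.For(0, 20, i =>
        {
            var watch = Stopwatch.StartNew();
            lock (Gate)
            {
                Thread.Sleep(50); // stand-in for the serialized per-call work
            }
            watch.Stop();
            times.Add(watch.ElapsedMilliseconds);
        });
        return times.ToArray();
    }

    static void Main()
    {
        foreach (long t in Run())
            Console.WriteLine($"Time taken under shared lock: {t} ms");
    }
}
```

If `AccessQueue` calls on a single shared `MQQueueManager` are serialized internally by the client, then any per-call cost that grew between .NET 6 and .NET 8 would be multiplied by the number of waiting callers, which would match the table above.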

 

 

### Analysis

* The issue is noticed only when the code is accessed in parallel (or through asynchronous/parallel logic); the sequential loop behaves comparably on both runtimes.
* The problem appears to be an interaction between .NET 8 and the IBM MQ library that makes the `AccessQueue` line extremely slow. Both the IBM and Microsoft teams have pointed to .NET 8 itself (case reference: 2410230030008938).
[MQClass.txt](https://github.com/user-attachments/files/17719151/MQClass.txt)
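A possible mitigation while the root cause is investigated (this is our assumption, not a fix confirmed by IBM or Microsoft): since all 20 parallel callers in the sample share a single `MQQueueManager`, giving each worker thread its own connection should keep callers from queuing behind one shared handle. A minimal sketch, reusing the `createQMObject` helper and `queue` field from the sample above; it requires a live queue manager to run, so it is illustrative only:

```csharp
using System.Threading;
using System.Threading.Tasks;
using IBM.WMQ;

// Hypothetical mitigation sketch: one MQQueueManager per worker thread, so
// parallel AccessQueue calls do not contend on a single shared connection.
static readonly ThreadLocal<MQQueueManager> perThreadQMgr =
    new ThreadLocal<MQQueueManager>(() => createQMObject());

static void AccessInParallel()
{
    Parallel.For(0, 20, i =>
    {
        MQQueueManager qMgr = perThreadQMgr.Value; // created once per thread
        MQQueue mqQueue = qMgr.AccessQueue(queue, MQC.MQOO_OUTPUT);
        // ... put the message, then release the handle ...
        mqQueue.Close();
    });
}
```

The trade-off is more channel instances on the queue manager side, so connection limits and channel pooling settings would need to be checked before adopting this in production.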
@VeeraraghavanSrinivasan added the `tenet-performance` label Nov 12, 2024
@mangod9 added this to the 10.0.0 milestone Nov 13, 2024

> Tagging subscribers to this area: @mangod9
> See info in area-owners.md if you want to be subscribed.

> Tagging subscribers to this area: @dotnet/area-system-threading-tasks
> See info in area-owners.md if you want to be subscribed.