Throughput performance #136
Replies: 5 comments 8 replies
-
Hey @lhmscpqd - you need to share some more information about your tests. Please send logs and a console trace of the gNB when you run your experiments. Also try to use UDP traffic rather than TCP. There will be some improvements coming with the next release (will be out shortly), but meanwhile this might help. On a side note - I see you're using 16 A72 cores. What kind of system is this? Can you share more info? Is it an LX2160A?
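A minimal sketch of such a UDP run with iperf3 (addresses and rates here are assumptions - `10.45.0.1` stands in for the UE's tunnel address; substitute your own):

```shell
# Hypothetical UDP throughput test with iperf3.
# 10.45.0.1 is an assumed UE tunnel address - adjust to your deployment.

UE_IP="10.45.0.1"   # assumption: UE address on the core-network tunnel
RATE="40M"          # offered load slightly above the expected DL ceiling
TIME=30             # seconds per run

# On the UE (server side):   iperf3 -s
# Behind the core (client side), downlink direction:
DL_CMD="iperf3 -c $UE_IP -u -b $RATE -t $TIME"
# Uplink: append -R so the server (UE) sends instead.
UL_CMD="$DL_CMD -R"

echo "$DL_CMD"
echo "$UL_CMD"
```

Offering a rate above the expected ceiling (`-b 40M`) lets iperf3 report packet loss once the link saturates, which TCP's congestion control would otherwise hide.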
-
Hello Andre, thanks for your fast reply. Sorry about that; I'm attaching some logs of the gNB (including terminal output) and iperf tests, now using UDP. Note that with UDP the throughput was much lower than with TCP (even though I used parallel streams). About the processor: yes, it's an LX2160A 16-core Arm Cortex-A72. I'm using a ClearFog CX LX2. I also forgot to detail the configuration of the performative machine; here it is:
-
But when you say "performative machine", you mean the one running the 5GC? The gNB is running on the ARM, right? It's an interesting platform because it has many cores, but they aren't very powerful - it's essentially four Raspberry Pi 4s. We might have to tweak the threading model a bit to get the max rate, but it should be possible. From the gNB console trace we can see that the gNB isn't getting "enough" traffic from the core to even allocate all resources.
-
Hi @lhmscpqd, looking at the config, you could try the following.
To be honest, I'm not sure the bottleneck is the CPU. It looks more like the transfer of samples to the USRP (RF lates/underflows). Let's try these options first; if they don't improve things, we'll look at other options. To enable 256QAM, use this option:
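The option itself didn't survive the copy here. As a sketch only (assuming a recent srsRAN Project gNB YAML configuration - the exact key names may differ in your version, so check the config reference), 256QAM is enabled via the MCS table setting:

```yaml
# Assumed srsRAN Project gNB config fragment - verify key names against your release.
cell_cfg:
  pdsch:
    mcs_table: qam256   # enable 256QAM on the downlink shared channel
  pusch:
    mcs_table: qam256   # optionally also on the uplink
```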
-
Closing discussion as it's no longer relevant to the original poster.
-
Issue Description
Hi, community.
After some tests using srsRAN, I was wondering how this stack would perform when executed on a high-performance machine, taking into consideration the throughput achieved on the UE (29.2 Mbps DL and 2.45 Mbps UL). I ran some tests on a performative machine too, but the result was similar (26 Mbps DL, 10.8 Mbps UL).
Even though the setup was wired and the gNB gains were calibrated, the throughput achieved on both machines was not as good as it could have been. Did you get results similar to this one? Did you do any performance optimization besides those in the "script/srsran_performance" script?
Thanks in advance.
Setup Details
Expected Behavior
Get a DL throughput closer to 41 Mbps (the theoretical limit for this configuration).
Actual Behaviour
Got a DL throughput of 29.2 Mbps.
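For reference, the ~41 Mbps ceiling is consistent with the 3GPP TS 38.306 approximate peak-rate formula under one plausible parameter set (10 MHz carrier, 15 kHz SCS, 52 PRB, 64QAM, 1 layer - these values are assumptions, since the exact cell config isn't shown above):

```python
# Rough peak-rate check using the TS 38.306 approximate data-rate formula.
# All parameter values are assumptions; the actual cell config may differ.

def nr_peak_rate_mbps(n_prb, q_m, layers=1, mu=0, scaling=1.0, overhead=0.14):
    """Approximate single-carrier DL peak data rate in Mbps."""
    r_max = 948 / 1024                    # maximum LDPC code rate
    t_symbol = 1e-3 / (14 * 2 ** mu)      # average OFDM symbol duration (s)
    rate_bps = (layers * q_m * scaling * r_max
                * (n_prb * 12 / t_symbol) * (1 - overhead))
    return rate_bps / 1e6

# Assumed config: 52 PRB (10 MHz @ 15 kHz SCS), 64QAM (Qm=6), 1 layer.
print(round(nr_peak_rate_mbps(n_prb=52, q_m=6), 1))  # -> 41.7
```

Under those assumptions the formula gives about 41.7 Mbps, matching the stated limit, so the measured 29.2 Mbps leaves roughly 12 Mbps on the table.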