Sending random values to PMU Connection Tester: Connectivity and Reporting Rate Issues #20
This is going to be a long answer, and you might want to start drawing this out, because there are threads and processes being spawned. I use the PDC module more than the PMU module, but from what I've seen I assume the following: if the problem is computer-related, it is because of a sleep, and if you disable the random value generation and send fixed values, the throughput increases. (Try checking this out by commenting out line 17 of the code; it should increase the throughput.)

So what is a PMU? From the code's perspective, a PMU is a PDC and a PMU at the same time, because it has to handle both the incoming and the outgoing traffic. In this example you set a 30 fps value; I've run tests in the past and managed to get upwards of 480 fps, so why is this happening?

In the pmu object creation you create the object itself, then set the debug level, then create the CFG-2 frame, set it, and set its header. Finally, you call its run method. Lines 193 to 206 in 66e6c49
The acceptor thread creates a buffer and a Process that calls the pdc_handler, which deals with the parsing and sending of data. Lines 209 to 227 in 66e6c49
This method (now a separate process) does some extensive parsing and then, on line 340, has the following lines: Lines 340 to 350 in 66e6c49
Specifically, on line 347 you see a sleep(delay), where delay is instantiated at Lines 258 to 262 in 66e6c49
Okay, so you have a fixed sleep of, in this case, 1/30 seconds, but this value does not account for the time the code took to execute! I assume this is the problem, but testing is necessary. If you take the sleep out entirely, you're going to send everything in the buffer as fast as possible, so that's not a good option; the ideal would be to measure the time the code took to execute and subtract it from the sleep total. But this has its own problems. A better alternative might be to use time.monotonic_ns() and schedule frames against it. If you're interested, I can write more about this possibility.

Going back to randomPMU.py: I've run some tests with the PMU on Linux, but on Windows I did hit the same problem of a maximum of 20 fps. I'd suggest either switching to Linux or looking into that sleep. Another alternative would be PyPy, but I'm not sure it works on Windows. If Windows is mandatory, then look at that delay; if not, give Linux a try. I'm sorry if some ideas got disconnected, I've written this at different times.
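The subtract-the-execution-time idea above can be sketched as follows. This is a minimal illustration, not pypmu's actual code: `build_and_send_frame()` is a hypothetical stand-in for the real work, and the 5 ms it pretends to cost is an assumed value.

```python
import time

DATA_RATE = 30
PERIOD = 1.0 / DATA_RATE  # nominal frame period, seconds

def build_and_send_frame():
    # stand-in for the real frame construction + socket send (hypothetical)
    time.sleep(0.005)  # pretend the work costs 5 ms

start = time.monotonic()
for _ in range(5):
    t0 = time.monotonic()
    build_and_send_frame()
    elapsed = time.monotonic() - t0
    # subtract the execution time from the nominal sleep, never going negative
    time.sleep(max(0.0, PERIOD - elapsed))
total = time.monotonic() - start
print(f"5 frames in {total:.3f} s (ideal {5 * PERIOD:.3f} s)")
```

With the plain `sleep(1/30)` the loop would take 1/30 s *plus* the work per frame; with the compensation, the work is absorbed into the sleep budget.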
Hey everyone, Regarding the low reporting rate issue, you should check We have also considered introducing some kind of |
My system administrator increased the CPU resources in the Windows virtual machine I was running it on, and I get better frame rates now. Still, the deviation grows at higher frame rates: for 10 fps it's ~9.98, for 25 fps ~24.23, for 50 fps ~48, and for 100 fps ~95. I will try the sleep-delay approach and report my findings.
Hi Yuri,

But I get 0 ns or 16000000 ns. I am not sure how to isolate the delay introduced by the sleep from the inherent delay of the code execution time.
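One way to isolate the sleep's contribution is to timestamp immediately around the `sleep()` call itself, so no other code falls inside the measured window. (Incidentally, the 16000000 ns ≈ 16 ms figure matches the default Windows system timer granularity of about 15.6 ms, which quantizes `time.sleep()` on that platform.) A minimal measurement sketch:

```python
import time

requested = 1 / 30  # seconds we ask for

t0 = time.monotonic_ns()
time.sleep(requested)   # nothing but the sleep inside the measured window
t1 = time.monotonic_ns()

actual = (t1 - t0) / 1e9
oversleep_ms = (actual - requested) * 1000  # extra delay added by the OS scheduler
print(f"requested {requested * 1000:.2f} ms, slept {actual * 1000:.2f} ms, "
      f"oversleep {oversleep_ms:.2f} ms")
```

Run the same measurement around the rest of the loop body (without the sleep) to get the code-execution component separately.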
Ok,

This is a test-case scenario for sending data at precisely 1/30 s.
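The code block for that test case did not survive the export. A minimal sketch of what such a fixed-rate loop could look like, using the absolute-deadline approach with `time.monotonic_ns()` discussed above; `send_frame()` is a hypothetical stand-in for the real socket send:

```python
import time

PERIOD_NS = 1_000_000_000 // 30  # 1/30 s in nanoseconds

def send_frame(i):
    pass  # stand-in for the real socket send (hypothetical)

deadline = time.monotonic_ns()
stamps = []
for i in range(10):
    send_frame(i)
    stamps.append(time.monotonic_ns())
    # advance an absolute deadline; time spent in send_frame() is
    # automatically absorbed by the shrinking sleep
    deadline += PERIOD_NS
    remaining = deadline - time.monotonic_ns()
    if remaining > 0:
        time.sleep(remaining / 1e9)

intervals_ms = [(b - a) / 1e6 for a, b in zip(stamps, stamps[1:])]
print([f"{d:.1f}" for d in intervals_ms])  # intervals should cluster near 33.3 ms
```

Because the deadline is absolute rather than relative, small overshoots in one iteration shorten the next sleep instead of accumulating as drift.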
Hi Yuri,

I tried this, but obviously the if branch is not entered.
That was an example, I didn't think you'd actually use it as plug-and-play. I have no idea which if it is that obviously isn't entered; I'd suggest you draw a flowchart or some activity diagrams to map the code out.

I see now that you compute averages with 1000/data_rate, so your data rate would be a minimum of 10 fps and a maximum of 200 fps. phasor_list = [] could be changed to phasor_list.clear() to the same effect, but actually, if you are using a numpy array, it would be better to pre-allocate the data with zeros, since you already know its size from the data rate, and then use the iterator to decide when to compute the averages. But these are timing improvements; are you having trouble with timing? I'm also having some trouble understanding where the issue lies right now; again, I suggest you point out exactly what the problem is and where.

I glanced at this: phi_B = phi_A + (math.pi) * 2 / 3. Which algorithm are you using to determine phase A? I see now that you're using the same magnitude for phases B and C as for phase A, only shifted by ±2π/3, so essentially this is still a single-phase PMU; you could theoretically send only one voltage phasor, if you only have one, by changing the configuration frame. Also, FRACSEC_LIST and TEMPO could be changed only when the DataRate changes.
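The pre-allocation idea above could look something like this. The names (`read_sample`, `phasor_buf`) and the constant-valued stand-in measurement are hypothetical; only the 1000/data_rate window size comes from the discussion:

```python
import numpy as np

def read_sample():
    return 1.0  # stand-in for the real phasor measurement (hypothetical)

data_rate = 30
samples_per_avg = 1000 // data_rate  # the 1000/data_rate window mentioned above

# pre-allocate once instead of rebuilding phasor_list every cycle
phasor_buf = np.zeros(samples_per_avg)

for frame in range(3):
    for i in range(samples_per_avg):
        phasor_buf[i] = read_sample()
    magnitude = phasor_buf.mean()  # average over the window
    # the same buffer is reused on the next pass; no reallocation

print(samples_per_avg, magnitude)
```

This avoids allocating a fresh list (or array) on every reporting interval, which matters mainly when chasing timing jitter at high frame rates.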
Hi, I was freaking out because I tried several methods but still had some delay creeping in. Upon investigation I found out that the socket data has some sort of delay. I have no clue why.
I don't have enough reputation to add a comment on SO, and this isn't a fully fledged answer, so I suggest you check the packet arrivals at the NIC using Wireshark, to test whether the packets are all arriving at 16 ms intervals.
Thanks Yuri, yes, I have checked with Wireshark and the packets arrive at 1 ms intervals. I also printed monotonic_ns() values with each data packet and learnt that 14-16 packets are delivered at once, and then after a 14-16 ms delay the next 14-16 are delivered. I have no idea why socket.recv() is behaving this way. The buffer is 32 bytes, so why is it holding 15*32 bytes? I feel so out of my depth here. Any guidance would be hugely helpful. Thanks. I still have no reply on SO.
I see that you've updated the SO question and got some comments. I'm not able to replicate your issue on my Ubuntu; I've tried the code you posted on SO with only minor changes, as follows:
and as sender:
This configuration, executed at the same time in two terminals, produced the 1 ms recv data rate. I'm sorry I can't help more; it may be Windows. I'd suggest trying this combination of scripts on your computer, one in each terminal, just to see whether the data arrives at the correct time on localhost. But it still might not work. Really weird. Sorry I can't be of more help, but if it does work, then the plot thickens. Anyway, I think SO is the best place to discuss this.
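The receiver and sender snippets from this comment did not survive the export. As a hedged reconstruction of the idea (not Yuri's actual code), here is a self-contained localhost pair: one thread receives 32-byte datagrams and records inter-arrival gaps while the main thread sends one packet per millisecond. UDP, port 9999, and the packet count are all assumptions:

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999  # assumed localhost test port
N = 200  # packets to send

def receiver(gaps):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(1.0)  # don't hang forever if a datagram is lost
    prev = None
    try:
        for _ in range(N):
            sock.recv(32)
            now = time.monotonic_ns()
            if prev is not None:
                gaps.append((now - prev) / 1e6)  # inter-arrival gap, ms
            prev = now
    except socket.timeout:
        pass
    sock.close()

def sender():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * 32
    deadline = time.monotonic_ns()
    for _ in range(N):
        sock.sendto(payload, (HOST, PORT))
        deadline += 1_000_000  # pace: one packet per millisecond
        remaining = deadline - time.monotonic_ns()
        if remaining > 0:
            time.sleep(remaining / 1e9)
    sock.close()

gaps = []
rx = threading.Thread(target=receiver, args=(gaps,))
rx.start()
time.sleep(0.1)  # give the receiver time to bind
sender()
rx.join()
print(f"{len(gaps)} gaps, mean {sum(gaps) / len(gaps):.2f} ms")
```

On Linux the gaps stay close to 1 ms; if the same pair shows 14-16 ms bursts on Windows, that points at the OS timer/scheduler rather than at pypmu.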
Hi,
I am new to GitHub and this field as well. In case I am doing or asking anything stupid, please guide me.
I am running the randomPMU code, but when I connect with the PMU Connection Tester, I get the following error:
C:\Users\pdas\PycharmProjects\test32\venv\Scripts\python.exe "C:/Users/pdas/Documents/vPMU progress/pypmu/examples/randomPMU.py"
2020-06-16 22:34:13,200 INFO [1410] - PMU configuration changed.
2020-06-16 22:34:13,200 INFO [1410] - PMU header changed.
2020-06-16 22:34:13,204 INFO [1410] - Waiting for connection on 127.0.0.1:1410
2020-06-16 22:34:31,896 INFO [1410] - Waiting for connection on 127.0.0.1:1410
2020-06-16 22:34:32,800 INFO [1410] - Connection from 127.0.0.1:56705
2020-06-16 22:34:32,801 WARNING [1410] - Message not received completely <- (127.0.0.1:56705)
Then, if I send "Enable Real-time Data" from the PMU Connection Tester GUI, I can see the values being sent:
2020-06-16 22:38:23,428 DEBUG [1410] - Message sent at [1592339903.428882] -> (127.0.0.1:56705)
2020-06-16 22:38:23,521 DEBUG [1410] - Message sent at [1592339903.521880] -> (127.0.0.1:56705)
2020-06-16 22:38:23,630 DEBUG [1410] - Message sent at [1592339903.630881] -> (127.0.0.1:56705)
2020-06-16 22:38:23,680 DEBUG [1410] - Message sent at [1592339903.680881] -> (127.0.0.1:56705)
2020-06-16 22:38:23,784 DEBUG [1410] - Message sent at [1592339903.784881] -> (127.0.0.1:56705)
But I cannot see the values in the plot, or the values themselves, in the PMU Connection Tester GUI. It says "Awaiting Configuration Frame".
Furthermore, I have noticed that the synchrophasor reporting rate, despite being set to 30 frames per second, varies widely. How can that be tackled?