sched: Support POSIX's SCHED_RR scheduling policy #1338

Status: Draft (wants to merge 7 commits into master)

Conversation

@nmeum commented Nov 11, 2024

Previously, pull request #1223 added support for the SCHED_FIFO policy but didn't implement SCHED_RR. This PR follows up on that by proposing an implementation of SCHED_RR.

Time slicing support, as mandated by SCHED_RR, is implemented through the set_realtime_time_slice(duration) API added in the aforementioned pull request. Within the scheduler, the amount of time the thread has run so far is tracked; if it exceeds the set duration, the thread is preempted (if there is a runnable thread of the same or higher priority). The thread's run time is reset on preemption.

This is my first OSv pull request; I added some basic tests which I can expand further. Additionally, I checked that the tests added in #1223 still pass. Further testing is definitely needed, hence this is a draft PR for now.

Let me know what you think.
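
For illustration, a minimal usage sketch of the time-slice API is shown below. Only `set_realtime_time_slice(duration)` is the API referenced in this PR; the header path and `set_realtime_priority()` are assumed names for the sake of the example.

```cpp
#include <osv/sched.hh>
#include <chrono>

// Hypothetical sketch, not code from this PR: configure a thread for
// round-robin behaviour with a 1 ms slice. set_realtime_priority() and the
// header path are assumptions; set_realtime_time_slice() is the API
// referenced in the PR description.
void make_round_robin(sched::thread* t)
{
    t->set_realtime_priority(1);                               // assumed realtime-priority setter
    t->set_realtime_time_slice(std::chrono::milliseconds(1));  // preempt after 1 ms if an
                                                               // equal-priority thread is runnable
}
```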

core/sched.cc (resolved)
Comment on lines +368 to +371
// p is no longer running, if it has a realtime slice reset it.
if (p->_realtime.has_slice()) {
p->_realtime.reset_slice();
}
nmeum (Author):

It is not entirely clear to me if the slice should be reset if the thread is no longer runnable (e.g. because of blocking I/O). POSIX does not explicitly describe when the slice should be reset.

nyh (Contributor):

Yes, I'm also not sure, but I think this if is right. I think the idea of the time slice is to make sure that a single thread in its priority group never runs more than 1ms (for example) without letting other threads in its group run. But if the thread blocks or yields voluntarily (I believe this if covers both cases, right?), then it gives some other thread a chance to run for a whole time slice of its own, so it's only fair that this thread's time slice is reset to zero. I think.
I tried searching if anybody mentions this question, and couldn't find such a discussion.

@wkozaczuk (Collaborator) left a comment

Your code change looks good. I did ask some questions though.

// If the threads have the same realtime priority, then only reschedule
// if the currently executed thread has exceeded its time slice (if any).
if (t._realtime._priority == p->_realtime._priority &&
((!p->_realtime.has_slice() || p->_realtime.has_remaining()))) {
wkozaczuk (Collaborator):

So all this means the current thread p should stay running if _time_slice is 0 (run 'forever' until yields or waits) OR there is still time left to run per its _time_slice, right?

nmeum (Author):

[…] should stay running if _time_slice is 0 (run 'forever' until yields or waits) OR there is still time left to run per its _time_slice, right?

Yes: p->_realtime.has_slice() checks whether the thread has a time slice (i.e. _time_slice != 0), and p->_realtime.has_remaining() checks whether there is still time remaining on the slice (if it has one).

Note that even if the thread has exceeded its time slice, it may still be selected to run again if there is no thread with a higher priority. Hence the priority comparison in the if condition.
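
To make these semantics concrete, here is a small self-contained sketch of the state the two helpers inspect (field names follow the discussion; the types are assumptions for illustration, not the actual OSv definitions):

```cpp
#include <chrono>

// Hypothetical sketch (not the actual OSv code) of the state inspected by
// has_slice()/has_remaining() as discussed above.
struct realtime_state {
    unsigned _priority = 0;                   // 0 means "not a realtime thread"
    std::chrono::nanoseconds _time_slice{0};  // 0 means "no slice": run until yield/block
    std::chrono::nanoseconds _run_time{0};    // time consumed in the current slice

    bool has_slice() const     { return _time_slice.count() != 0; }
    bool has_remaining() const { return _run_time < _time_slice; }
    void reset_slice()         { _run_time = std::chrono::nanoseconds{0}; }
};
```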

enqueue(*p);
p->_realtime.reset_slice();
} else {
// POSIX requires that if a real-time thread doesn't yield but
wkozaczuk (Collaborator):

So this means that if the current thread p's _time_slice is 0 OR p still has some remaining time to run, we will call enqueue_first_equal(). Is this correct?

nyh (Contributor):

Yes, I think it's correct. If we got here it means p was preempted. If it still has remaining time, it means it was preempted by a higher-priority realtime thread but when that higher-priority thread doesn't want to run, this thread p should continue running and continue its current time slice. The documentation says: "A SCHED_RR thread that has been preempted by a higher priority thread and subsequently resumes execution as a running thread will complete the unexpired portion of its round-robin time quantum.". It should be the first one in its priority group to run (and therefore enqueue_first_equal()) just like when no time slices existed.
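
Putting the two branches together, the requeue decision under discussion looks roughly like the following sketch, reconstructed from the diff fragments quoted in this review (helper names come from the PR; the surrounding structure is simplified):

```cpp
// Sketch of the requeue decision for a preempted realtime thread p,
// reconstructed from the quoted diff; not a verbatim copy of the PR.
if (p->_realtime.has_slice() && !p->_realtime.has_remaining()) {
    // Quantum used up: go to the tail of the equal-priority group and
    // start a fresh slice the next time the thread runs.
    enqueue(*p);
    p->_realtime.reset_slice();
} else {
    // Preempted by a higher-priority thread with quantum left (or no slice
    // configured at all): resume ahead of its equals so the unexpired
    // portion of the quantum is completed, as POSIX requires.
    enqueue_first_equal(*p);
}
```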

core/sched.cc (resolved)
@wkozaczuk (Collaborator):

@nyh What do you think?

@nyh (Contributor) left a comment

Looks very nice to me, and appears correct although I'm a bit worried that I'm rusty in this code and might have missed something. I only left a few minor comments/requests.

I just realized that we never actually implemented the POSIX API for these features ;-) I think we have such patches in #386 and maybe it would be nice to revive them.
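
For reference, the POSIX-level interface in question is pthread_setschedparam() (or sched_setscheduler()); a minimal sketch of what calling code would look like, independent of this PR:

```cpp
#include <pthread.h>
#include <sched.h>

// Minimal sketch of the POSIX API mentioned above: switch the calling
// thread to SCHED_RR with the given priority. Returns 0 on success.
int enable_sched_rr(int prio)
{
    struct sched_param sp {};
    sp.sched_priority = prio;  // must lie within sched_get_priority_min/max(SCHED_RR)
    return pthread_setschedparam(pthread_self(), SCHED_RR, &sp);
}
```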

core/sched.cc Outdated
@@ -276,6 +276,10 @@ void cpu::reschedule_from_interrupt(bool called_from_yield,
}
thread* p = thread::current();

if (p->_realtime.has_slice()) {
nyh (Contributor):

I think that today it is possible to set the realtime time slice without setting realtime priority yet, and I think in this case you don't want to increment _run_time. So maybe you also need to check if realtime.priority is > 0?

@nmeum (Author), Dec 4, 2024

I think that today it is possible to set the realtime time slice without setting realtime priority yet […]

Do we want to support setting a time slice without providing a realtime priority? If so, I can also adjust the code accordingly.

So maybe you also need to check if realtime.priority is > 0?

Added this for now in 8db3444

core/sched.cc (outdated, resolved)
// rather is preempted by a higher-priority thread, it be
// reinserted into the runqueue first, not last, among its equals.
enqueue_first_equal(*p);
if (p->_realtime.has_slice() && !p->_realtime.has_remaining()) {
nyh (Contributor):

Again, maybe we need to check if realtime.priority>0 because maybe has_slice() just records some old setting and it's not in use now?

Or, maybe, for simplicity, we should just ensure that if the real-time priority is ever set to 0, then slice is also set to 0?
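
If we go with that simplification, a self-contained sketch could look like the following (all names are assumptions for illustration, not the actual OSv signatures):

```cpp
#include <chrono>

// Hypothetical sketch of the simplification proposed above: clearing the
// realtime priority also clears the slice, so has_slice() can never
// reflect a stale setting on a non-realtime thread.
struct realtime_state {
    unsigned _priority = 0;
    std::chrono::nanoseconds _time_slice{0};
    std::chrono::nanoseconds _run_time{0};
};

void set_realtime_priority(realtime_state& rt, unsigned priority)
{
    rt._priority = priority;
    if (priority == 0) {
        rt._time_slice = std::chrono::nanoseconds{0};  // drop the slice entirely
        rt._run_time   = std::chrono::nanoseconds{0};  // and the accumulated run time
    }
}
```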

core/sched.cc (outdated, resolved)
core/sched.cc (resolved)
tests/tst-thread-realtime.cc (outdated, resolved)
long prev_switches = -1;
for (int i = 0; i < num_threads; i++) {
long switches = threads[i]->stat_switches.get();
if (prev_switches != -1 && prev_switches != switches) {
nyh (Contributor):

Am I correct that you want all threads to have exactly the same number of context switches? How can we be confident of this - can't one happen to have one more than the others because of some inaccuracy or something?

nmeum (Author):

Am I correct that you want all threads to have exactly the same number of context switches?

Yes. My thinking was: if we assign each thread a time slice of size N and then wait N * NUM_THREADS * EXPECTED_SWITCHES time units, then (on a single-core machine under a realtime scheduling policy) we would expect each thread to be preempted after N time units. As such, each thread should have EXPECTED_SWITCHES context switches.

Maybe I am missing something obvious, but I haven't seen this test fail yet. However, since this is not a hard-realtime operating system, I assume we could see delays here and there in the scheduler? We can also make the comparison fuzzy, allowing the number of context switches to be off by 1 or 2?
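
A sketch of such a fuzzy comparison, reusing the names from the quoted test snippet (the tolerance value is arbitrary):

```cpp
// Hypothetical fuzzier check: accept per-thread switch counts that differ
// from the first thread's count by at most a small tolerance, instead of
// requiring them to be exactly equal.
constexpr long tolerance = 2;
long reference = threads[0]->stat_switches.get();
bool ok = true;
for (int i = 1; i < num_threads; i++) {
    long diff = threads[i]->stat_switches.get() - reference;
    if (diff < -tolerance || diff > tolerance) {
        ok = false;
        break;
    }
}
```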

// Since both threads are pinned to the CPU and the higher priority
// thread is always runnable, the lower priority thread should starve.
bool ok = high_prio->thread_clock().count() > 0 &&
low_prio->thread_clock().count() == 0;
nyh (Contributor):

I'm a bit worried that it's theoretically possible that although you start()ed the high prio thread first and only then the low_prio one, maybe the low prio one got to run for a microsecond before the high prio one so its thread clock will not be exactly zero. But maybe in practice it doesn't happen...

nyh (Contributor):

This test can also have the opposite problem. If I understand correctly your TIME_SLICE is absolutely tiny, 0.1 milliseconds, and after starting high_prio and low_prio you only sleep 3 times that, i.e., 0.3 milliseconds, so it is theoretically possible that the test will pass even without any realtime priorities or anything, just because we let the first-ran highprio thread run for 0.3 milliseconds straight.

nmeum (Author):

If I understand correctly your TIME_SLICE is absolutely tiny, 0.1 milliseconds

I increased it further in 3c27113

maybe the low prio one got to run for a microsecond before the high prio one

Note: they are both pinned to the same CPU, so the higher prio one should also be first in the CPU runqueue and should always be runnable; I am not sure under which scenario the lower prio one would get the CPU. However, I believe you have more expertise with the scheduling code. We can also discard this test case.

What test cases would you like to see instead for SCHED_RR?

@nmeum (Author) commented Dec 4, 2024

Thanks a lot to both of you for the detailed comments and feedback! I made some minor changes and left further comments above. I think the main two things that remain to be sorted out are:

  1. Do we want to support setting a realtime slice without a priority? If not, there are a few places where we should check if priority == 0, as pointed out above.
  2. I believe the tests should be expanded a bit; if you have ideas regarding useful test cases for SCHED_RR, let me know. Maybe there are also tests for this scheduling policy in some POSIXy open source operating system that we can use as a source of inspiration.
