
[Product Pull Request] fix: increase grading rounding precision to avoid incorrect grades #238

Closed
jmakowski1123 opened this issue Feb 23, 2023 · 40 comments
Labels
product review complete (PR has gone through product review)

Comments

@jmakowski1123

jmakowski1123 commented Feb 23, 2023

For Contributing Author:

This is the Primary Product Ticket for the following community contribution: increase grading rounding precision to avoid incorrect grades

Checklist prior to undergoing Product Review:

The following information is required in order for Product Managers to be able to review your pull request:

  • Explanation of the problem being solved
  • Description of how users will be impacted, and which users will be impacted
  • Screenshots or video showing the functionality or fix, before and after
  • Reproduction steps and/or testing steps

Only if necessary:

  • If necessary, links to corresponding configuration changes
  • If necessary, links to corresponding enablement changes, particularly waffle/toggle status details

Related PRs

For Product Manager doing the review:

What criteria should be analyzed from Product to approve a PR?

  • The problem being solved by the feature or fix is clear.
  • There is clarity on how the change or fix will impact the end user.
  • It is clear that the change will not negatively impact users or other areas of the platform.
  • The change is implemented comprehensively.
  • Any changes to UI use the current, standard Paragon Design System: https://paragon-openedx.netlify.app/
@jmakowski1123 jmakowski1123 changed the title from "feat: increase grading rounding precision" to "[Product Pull Request] feat: increase grading rounding precision" Feb 23, 2023
@github-actions

Thanks for your submission, @openedx/open-edx-project-managers will review shortly.

@jmakowski1123 jmakowski1123 moved this to Product Pull Requests in Open edX Roadmap Feb 23, 2023
@jmakowski1123
Author

Information from the original PR

Enabling the rounding in #16837 has been causing noticeable (up to 1 percentage point) differences between non-rounded subsection grades and a total grade for a course. This increases the grade precision to reduce the negative implications of double rounding.

Jira

OSPR-5819

Sandbox

https://pr27788.sandbox.opencraft.hosting/
Direct link to the progress page (you can log in as staff).

Testing instructions

  1. Import this course in Studio.
  2. Complete two units from there.
  3. Go to the Progress page and see that the score without this PR is 35%. After this change, it should be 34%.

Explanation

This course contains two subsections graded as Homework (weight: 75%):

  1. In the first subsection, users can get 2 out of 3 points (66.67%).
  2. In the second one, users can get 1 out of 4 points (25%).

In the current approach, the grades of subsections are rounded first. Therefore, in this case, we're getting (67% + 25%) / 2 / 4 * 3 (dividing by 4 and multiplying by 3 applies the 75% weight), which gives us 34.5. This is rounded up to 35%.

This PR changes these calculations to (66.67% + 25%) / 2 / 4 * 3, which returns 34.37625, which is then rounded down to 34%.
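
To make the arithmetic concrete, here is a minimal standalone sketch of the two calculations (illustrative only, not the edx-platform code; the round_half_up helper is hypothetical, written to match the "34.5 is rounded up to 35%" behavior described above):

import math

def round_half_up(value, ndigits=0):
    # Hypothetical helper: round halves away from zero (so 34.5 -> 35),
    # matching the rounding behavior described in the example above.
    factor = 10 ** ndigits
    return math.floor(value * factor + 0.5) / factor

def course_grade(subsection_percents, weight, precision):
    # Round each subsection grade to `precision` decimals first, then
    # average the rounded grades, apply the weight, and report a whole percent.
    rounded = [round_half_up(p, precision) for p in subsection_percents]
    weighted_average = sum(rounded) / len(rounded) * weight
    return int(round_half_up(weighted_average))

percents = [100 * 2 / 3, 100 * 1 / 4]  # 66.67% and 25%
print(course_grade(percents, weight=0.75, precision=0))  # 35 (current behavior)
print(course_grade(percents, weight=0.75, precision=2))  # 34 (after this PR)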

Deadline

None.

Reviewers

Other information

Private-ref: BB-4210

@jmakowski1123
Author

Additional information from the author:

Hi @ProductRyan, thank you for checking. That's correct - the subsection grades calculated by the platform are inaccurate because of the double rounding. Therefore, we want to increase the rounding precision to produce correct results.

@jmakowski1123
Author

jmakowski1123 commented Feb 23, 2023

Pending edX platform wide review as of 2/23/23
@ProductRyan When that platform review is complete, please resume that thread in this ticket. If there are platform-wide specs that need to be taken into consideration, we can widen the scope of the initiative and consider it for additional funding.

@Agrendalath
Member

@jmakowski1123, just checking on the status of this. Is there anything we can do to help move this forward?

@mphilbrick211

@jmakowski1123, just checking on the status of this. Is there anything we can do to help move this forward?

CC: @ProductRyan @Daniel-hershel

@ProductRyan

@Agrendalath sorry for the long wait here, but we want to make sure we nail this cross-platform once and for all. The Aurora team is starting to open up some capacity and @Daniel-hershel is working to build out a roadmap now that he's been on board for a few months.

I've given Daniel all the details and he's working to incorporate it into his plans.

@ghassanmas
Member

ghassanmas commented Apr 5, 2023

My 2 cents:

This change can probably lead to a performance issue. In a nutshell, to be more accurate you need more compute time, and thus more cost; this is most relevant at high scale.

Below is a script showing that the change would add around 30% to the run time when rounding 100K grades in a row.

Another, unrelated issue: what would happen if an instructor re-ran grade computation by mistake for a certificate that has already been issued? This might not be relevant, depending on the platform's certification policy.

Lastly, a common pattern for landing something controversial is just to wrap it in a feature flag, which might not always be the perfect solution, but it becomes the more relevant path if the end goal in the first place is just to get the change in as quickly as possible.

from datetime import datetime
from numpy import around


def do_100000_round(decimals):
    # Round the same grade value 100,000 times to simulate bulk grade computation.
    for _ in range(100000):
        around(34.37625 / 100, decimals)


# Time the rounding at the current precision (2 decimals).
t_before_2 = datetime.now()
do_100000_round(decimals=2)
t_after_2 = datetime.now()

# Time the rounding at the increased precision (4 decimals).
t_before_4 = datetime.now()
do_100000_round(decimals=4)
t_after_4 = datetime.now()

result_2 = t_after_2 - t_before_2
result_4 = t_after_4 - t_before_4
print(f"result for 2 round {result_2}\n")
print(f"result for 4 round {result_4}\n")
print(f"Increase of run time {(around(result_4 / result_2, decimals=4) - 1) * 100} %")

@Agrendalath
Member

Agrendalath commented Apr 5, 2023

@ghassanmas, I ran this locally, and the result was 10.51%. I also tried this with 100000 iterations, and the result was 0.99%. For 10000000 iterations, the result was 0.09%.
The percentage you provided means a change of 0.03477s. For a bigger scale (10M rows), the difference is even smaller (0.0249s). Are you sure this is relevant to high scale?
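
A steadier way to compare the two precisions than one-off datetime measurements would be the standard library's timeit module. The sketch below is illustrative only and not part of the PR:

import timeit

from numpy import around

# Best-of-five timing of 100,000 roundings at each precision.
for decimals in (2, 4):
    best = min(timeit.repeat(
        lambda: around(34.37625 / 100, decimals),
        number=100_000,
        repeat=5,
    ))
    print(f"{decimals} decimals: best of 5 runs = {best:.4f}s per 100k roundings")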

Another, unrelated issue: what would happen if an instructor re-ran grade computation by mistake for a certificate that has already been issued? This might not be relevant, depending on the platform's certification policy.

Yes, I already mentioned this in the PR:

This change will not affect existing persistent grades as long as they are not re-generated. Otherwise, this can cause up to a 1 percentage point difference in the results (on a rare occasion, like the one described in the PR).

The current grades generated by Open edX are inaccurate, though. It could be better to call this a fix instead of a feat.

if the end goal in the first place is just to get the change in as quickly as possible

Well, this PR (openedx/edx-platform#27788) has been in product review since May 31, 2021...

@ghassanmas
Member

@ghassanmas, I ran this locally, and the result was 10.51%. I also tried this with 100000 iterations, and the result was 0.99%. For 10000000 iterations, the result was 0.09%.
The percentage you provided means a change of 0.03477s. For a bigger scale (10M rows), the difference is even smaller (0.0249s). Are you sure this is relevant to high scale?

I need to give it another look. It would also definitely vary from machine to machine (Intel vs. ARM, etc.); it was just a rough guess.

Yes, I already mentioned this in the PR:

Yes, it was sufficient; I didn't mean to mention it again. Rather, there are probably cases an instructor needs to consider, such as a student who had a passing grade and, after this change, a failing one (or vice versa). I speculate there might be bureaucracy involved, depending on the program, the org authoring the course, etc. Again, I am just speculating.

@mphilbrick211

Hi @Daniel-hershel! Just checking to see if there's any update on this? Thanks!

@Daniel-hershel

@mphilbrick211 hi! We definitely recognize the need to address grade rounding in some way, and we appreciate this work. For this (or any rounding solution) to work, it would need to be implemented holistically, system-wide, which would include places like:

  • learner experience
  • gradebook
  • data back ends
  • credentialing
  • etc.

So for this solution to be accepted from a Product perspective, it would have to represent a holistic implementation. The issue with rounding is also on my radar as a potential place where the Aurora team will make an investment, and I can keep you posted on those developments as well.

@mphilbrick211

Thanks, @Daniel-hershel!

@mphilbrick211

Hi @Daniel-hershel - following-up to see if there's any update here?

@jmakowski1123
Author

@mphilbrick211 Let's move this PR to draft mode until we can find an owner to drive product specs for a platform-wide solution to rounding issues in grading.

@jmakowski1123 jmakowski1123 moved this to Needs to be re-scoped in Product Review Tracking Oct 19, 2023
@Agrendalath Agrendalath changed the title from "[Product Pull Request] feat: increase grading rounding precision" to "[Product Pull Request] fix: increase grading rounding precision to avoid incorrect grades" Nov 8, 2023
@mphilbrick211 mphilbrick211 moved this from Roadmap Feature Tickets (Product) to Product Review in Contributions Dec 11, 2023
@itsjeyd

itsjeyd commented Jul 12, 2024

Hi @jmakowski1123, do we know who will be leading the product review process as coordinator here?

@itsjeyd

itsjeyd commented Jul 18, 2024

CC @ali-hugo ⬆️

@ali-hugo

ali-hugo commented Jul 22, 2024

@itsjeyd I don't think this update would require more than one reviewer (which means we wouldn't really need a coordinator). However, I would like to check this in the next Core Product meeting on 30 July.

I'd be happy to be a product reviewer for this. @Agrendalath Is there a sandbox or similar where I can see the change in action?

@github-project-automation github-project-automation bot moved this from [Prod Proposals] On Hold to Abandoned in Open edX Roadmap Jul 22, 2024
@github-project-automation github-project-automation bot moved this from Roadmap Feature Tickets (Product) to Done in Contributions Jul 22, 2024
@itsjeyd

itsjeyd commented Jul 25, 2024

Sounds good, thanks @ali-hugo!

And just to understand better: I see you already marked this ticket as done. Since the July 30 meeting hasn't happened yet, what does that mean for the status of the product review process?

CC @Agrendalath

@ali-hugo

ali-hugo commented Jul 25, 2024

I see you already marked this ticket as done.

@itsjeyd Sorry, I have no idea how I did that - it was a mistake! I have reopened the ticket.

@ali-hugo ali-hugo reopened this Jul 25, 2024
@Agrendalath
Member

@ali-hugo,

Is there a sandbox or similar where I can see the change in action?

There used to be a sandbox for this PR, but it doesn't seem to be working anymore. @gabor-boros, do you know if we have a simple way to prepare one with openedx/edx-platform#27788 and openedx/frontend-app-learning#1397?

In any case, I described all impacted platform areas in this comment.

@ali-hugo

@Agrendalath @gabor-boros

In any case, I described all impacted platform areas in #238 (comment).

In that case, don't worry about the sandbox. I should have everything I need in that comment. Thanks!

@ali-hugo

ali-hugo commented Jul 30, 2024

@jmakowski1123 I've reviewed the changes outlined in this comment and everything looks good to me from a product perspective. The changes have been implemented with a broader platform perspective in mind.

However, there is a question that I can't answer, namely:

Regarding the Gradebook, after this change (in the edx-platform), the subsection scores will have increased precision. For example, instead of 67%, it will be 66.67%. I'm not sure if this is intended, but this line indicates that these scores should indeed be rounded to two decimals. We can change it to hide the decimal points or keep it as-is to make the scores more verbose for the Instructors. What do you think?

Should the two decimal places be shown in the Gradebook, or should they be hidden?

P.S. I just noticed that this ticket's status has been set to "abandoned". Any idea why that would be?

@ali-hugo

@itsjeyd A quick update on this:

@itsjeyd I don't think this update would require more than one reviewer (which means we wouldn't really need a coordinator). However, I would like to check this in the next Core Product meeting on 30 July.

Unfortunately, there was no time to ask about this in the Core Product Meeting yesterday as there were two presentations scheduled. That said, I'm pretty sure that my review, along with @jmakowski1123's answers to my questions here, will be sufficient.

I know this isn't really your "domain", but do you have any idea why this ticket's status may have been set to "abandoned"?

@itsjeyd itsjeyd moved this from Done to Roadmap Feature Tickets (Product) in Contributions Aug 8, 2024
@itsjeyd

itsjeyd commented Aug 8, 2024

@ali-hugo Sorry for the delayed reply. Scrolling up on this ticket, I found this:

[screenshot: sometimes-a-bot-is-to-blame-and-not-the-gardener]

So it looks like there is some automation in place that updates statuses on both the Contributions board and the Open edX Roadmap when someone closes an issue. Which makes sense, but unfortunately the automation doesn't seem to make any additional updates when the issue gets reopened:

[screenshot: and-its-lazy-too]

I went ahead and reverted the status manually on the Contributions board. Couldn't do the same for the roadmap as I don't seem to have permission to change ticket statuses there.

CC @jmakowski1123

--

Unfortunately, there was no time to ask about this in the Core Product Meeting yesterday as there were two presentations scheduled. That said, I'm pretty sure that my review, along with @jmakowski1123's answers to my questions #238 (comment), will be sufficient.

Sounds good, thanks for the update!

@jmakowski1123
Author

Thanks for the before and after data and for driving the check on this. What was the rationale for including the more precise grade for learners when they hover over the progress bar? I'm questioning whether that should be presented to learners, especially if the additional decimals are hidden from course teams in the gradebook.

@jmakowski1123 jmakowski1123 moved this from Abandoned to Being Developed in Open edX Roadmap Aug 14, 2024
@Agrendalath
Member

@jmakowski1123,

What was the rationale for including the more precise grade for learners when they hover over the progress bar?

As you can see in the PR, we didn't explicitly change anything on the Progress page. Both the Progress and Gradebook pages were designed to show more precise grades:

Regarding the Gradebook, after this change (in the edx-platform), the subsection scores will have increased precision. For example, instead of 67%, it will be 66.67%. I'm not sure if this is intended, but this line indicates that these scores should indeed be rounded to two decimals.

We only increased the precision of the values passed to these pages.

@Agrendalath
Member

@jmakowski1123, is there anything we should discuss before moving this forward?

cc: @ali-hugo, @itsjeyd

@jmakowski1123
Author

Got it. I think this is ok to move forward, thanks

@itsjeyd itsjeyd added the product review complete (PR has gone through product review) label Sep 20, 2024
@itsjeyd

itsjeyd commented Dec 11, 2024

@jmakowski1123 @mphilbrick211 It looks like this is done; the PR for it (openedx/edx-platform#27788) has been merged. Can we close this ticket and move it to Shipped on the roadmap?

@feanil feanil moved this from Being Developed to Shipped in Open edX Roadmap Dec 11, 2024
@feanil feanil closed this as completed by moving to Shipped in Open edX Roadmap Dec 11, 2024
@github-project-automation github-project-automation bot moved this from Shipped to Abandoned in Open edX Roadmap Dec 11, 2024
@github-project-automation github-project-automation bot moved this from Roadmap Feature Tickets (Product) to Done in Contributions Dec 11, 2024
@itsjeyd

itsjeyd commented Dec 12, 2024

Thanks for the quick follow-up @feanil!

It looks like the bot is moving roadmap tickets from Shipped to Abandoned. Do you know why that is? Doesn't seem like the right thing to do in this context, but maybe I'm missing something.

[screenshot: confused-bot]

CC @jmakowski1123 @mphilbrick211

@feanil feanil moved this from Abandoned to Shipped in Open edX Roadmap Dec 12, 2024
@feanil
Contributor

feanil commented Dec 12, 2024

Looks like we currently have a workflow that sets an item to Abandoned when it is closed.
[screenshot: workflow configuration]

I've re-marked it as Shipped, but it auto-closed when I marked it as Shipped before, and that triggered this further automation... so this was automation rules piling on top of each other. This should be okay in this case now, but I'm not sure what the right move is for the automation.
