[Feature] Implement new manual predictor #127

Conversation
Thanks! I'll have a look at the code in the coming days. My main request is that you actually use this for a few days and share your experience here - does it work perfectly, are you sometimes struggling with it, do you wish it behaved differently but can't exactly nail down how, etc.? I'd really love us to achieve something that you (and hopefully others) will actually want to use, not something that you will abandon after a week because it doesn't really solve the main pain point... |
Yeah. I will keep you posted. A thing I noticed right away is that an absolute brightness reduction, say 50 (out of a total max of 255 on my machine), was too much when the brightness was already near 50 or below. So maybe the reduction should be a percentage of the current brightness value? E.g. when the brightness is 50, applying a 50% reduction instead of an absolute 50 (which would make it zero - bad) results in an absolute brightness of 25. Do you have any idea how brightness intensity is perceived by human eyes, so that the configuration can be made as natural as possible? |
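As a small illustration of the difference being discussed (a hypothetical sketch, not wluma code), using the 50-out-of-255 example above:

```rust
/// Absolute reduction: subtract a fixed amount, clamped at 0.
/// With brightness 50 and reduction 50 this bottoms out at 0.
fn reduce_absolute(brightness: u64, reduction: u64) -> u64 {
    brightness.saturating_sub(reduction)
}

/// Percentage reduction: subtract a fraction of the *current* value,
/// so low brightness values are reduced less in absolute terms.
/// With brightness 50 and 50% this yields 25 instead of 0.
fn reduce_percent(brightness: u64, percent: u64) -> u64 {
    brightness - brightness * percent / 100
}

fn main() {
    assert_eq!(reduce_absolute(50, 50), 0);
    assert_eq!(reduce_percent(50, 50), 25);
    assert_eq!(reduce_percent(200, 50), 100);
}
```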
Looks like the "Weber-Fechner law" is one good model for this, see e.g. the article on Wikipedia. |
Nice! I guess percentage-based reductions do a good enough job of decreasing the brightness by more when it is high and by less when it is low, right? Edit: What I feared was that human perception might be uneven across the brightness spectrum - that it wouldn't follow a clean law like this, with some ranges behaving completely differently from the ones before and after. Thankfully that doesn't seem to be the case. |
I suppose you are thinking about percentages of percentages? So e.g. on my laptop ... Try it, let's see how it feels :) This at least sounds simpler than implementing some kind of power-law or logarithmic function; we can go that route if percentages don't achieve great results :) |
Yes, that's correct, and more simply, just ... I will test; I think this will be enough. Worst case, a custom curve is also an option. |
…ductions. Reduce jitter when manually changing brightness.
problematic with `brightnessctl --save set` and `brightnessctl --restore` run from hypridle
[als.none]
[als.none.manual]
5 = 0
7 = 0
10 = 0
15 = 1
20 = 3
25 = 6
30 = 10
35 = 13
40 = 15
45 = 17
50 = 20
60 = 25
65 = 30
70 = 35
75 = 40
80 = 45
85 = 50
90 = 55
100 = 60

The current config I am using for real-life testing, after the latest commit. @maximbaz What is the upper bound for the computed luma value? I assumed it was 255, but I could only get it up to ~92, I think, while displaying a solid white picture that spanned the whole screen. |
It's 0-100, like a percentage, but there could be a bug. Try this: do you get 100 on white with all controls hidden? https://deadpixeltest.org Make sure you aren't running tools that dim the screen or set opacity on inactive windows, or anything like that which could affect the whiteness of white. |
Getting 93 on that link too. With no controls, no black bars around the white, full window opacity, etc. |
If you have the time and desire, let's debug this in a separate issue, so as not to take focus away from your interesting contribution here :) I pushed ... Back to this PR: I'll go through the code once we get close to the behavior that you are satisfied with. I think it's awesome that you are continuing to experiment, and I feel like we are definitely getting somewhere. Here are some thoughts, semi-organized 😅
What do you think about all this, am I missing something perhaps? |
For now I am okay with 93 as highest luma because it is easy to work around it in the config for this PR. But if it turns out that the luma calculations are wrong in other ways too then I will definitely look at it. I created a new issue for tracking here: #129
I actually laughed out loud at this for a full minute. Nice find! I came up with it only semi-randomly.
Not sure if I understand correctly what you are saying, but going back and forth between dark and bright windows, the brightness value switches back and forth between the same exact two values. I monitored it like this:
Yup. |
Was your only concern inconsistent values moving back and forth? |
I feel like I had a different impression of how this is supposed to work; could you help me understand with an example?
And so it looks like after this simple trip we returned to the exact same conditions on screen (luma ...). But now that I'm re-reading your earlier messages and looking at the code, it looks like the code reduces brightness on every such event? So from ... |
A variable keeps track of the "pre-reduction" brightness such that, at all times, the following holds true:
This is done by:
Using the previously posted config as an example, where ...
|
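As a rough reconstruction of the idea described above (assuming the reduction is applied as an absolute amount; all names are hypothetical, not the actual wluma code), the bookkeeping might look roughly like this:

```rust
/// Sketch of the "pre-reduction" bookkeeping described above.
/// `reduction_for(luma)` stands for a lookup into the configured
/// `[als.none.manual]` table (see the config posted earlier).
struct ManualPredictor {
    /// Brightness the user "really" wants, before any luma-based reduction.
    pre_reduction: u64,
}

impl ManualPredictor {
    /// Invariant: applied brightness == pre_reduction - reduction_for(current luma).
    fn desired_brightness(&self, luma: u8, reduction_for: impl Fn(u8) -> u64) -> u64 {
        self.pre_reduction.saturating_sub(reduction_for(luma))
    }

    /// When the user adjusts brightness manually, fold the current reduction
    /// back in, so the invariant keeps holding on the next luma change.
    fn on_user_change(&mut self, new_brightness: u64, luma: u8, reduction_for: impl Fn(u8) -> u64) {
        self.pre_reduction = new_brightness + reduction_for(luma);
    }
}
```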
I seeee...! Many thanks for explaining - this is clearly a totally different algorithm than I imagined you were developing 😅 I honestly like the idea; your step-by-step scenario helped me a lot to understand exactly why it makes sense 😁 How has it been going over the last couple of days, are you satisfied so far with how it behaves throughout the day? I'm also curious how often you find yourself adjusting the brightness? I now have many thoughts, now that I'm finally on board with your idea 😅 Maybe just thoughts out loud that we don't have to act upon, but if you have some ideas I'd appreciate your feedback.
|
Thanks for the kind words.
Working perfectly. I haven't even needed to fine-tune the config further. My laptop stays in the same room with the windows closed (winter here in Pakistan, basically constant ambience), so I have needed to adjust brightness manually much less than before. But since this is based on the premise that wluma explicitly relies on manual adjustments, that isn't a problem anyway; I see the point of your question though 😊.
Yup, semi-randomly reduced the values bit by bit.
Maybe, but what is the logical explanation?
I'd need a solid explanation first if all of the config is going to be reduced to one or two constants; I am not sure what that explanation would be at this point. Even then, the current approach gives exact control to the user in an easy-to-understand way: just set which reduction applies for which luma. We can always ship sensible defaults which the user can tweak manually if they want. On second thought, I suspect even a simple linear plot would be just as effective, or even an inverted curve for that matter, which is what I think you mean. Still, I am not sure; it probably depends on the exact monitor, the common luma values in certain workflows, etc., and even then it might vary from person to person. My point is that the learning counterpart to this manual mode allows very nuanced predictions, and this is in line with that. The user can always start with a simple config like this and go from there, just like I did:

[als.none.manual]
20 = 5
60 = 10
80 = 20

I think we need a number of willing testers with varying setups to get the best answer, unless we can come up with a good explanation. Otherwise, the best course of action in my opinion is to let users do as they desire. Ahhh, this is a never-ending debate if we go down this rabbit hole. I am thinking KDE vs. GNOME.
So, some manual input / hand-holding to help wluma learn?
Nice idea, it can be discussed; can you make a new GitHub issue/discussion for this? |
Simply that you want whiter screen contents to reduce brightness more than darker screen contents?
Well, they can simply be the parameters of the polynomial, say up to a cubic one 😁 And we can provide a way to find the parameters, e.g. something like this: https://observablehq.com/@grahampullan/interactive-curve-fitting What's potentially interesting with this is that it becomes smooth: for every single luma value we get a corresponding reduction, so we no longer have ranges of values, and it will potentially be less noticeable when wluma reduces or increases brightness. Not saying we have to do anything now, I'm just expressing some considerations 😁 |
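For illustration, a minimal sketch of what evaluating such a cubic reduction curve could look like, mapping a 0-100 luma value to a reduction (the coefficients below are placeholders, not a fit of the config posted earlier, and none of this is committed code):

```rust
/// Coefficients of a hypothetical cubic curve r(x) = a + b*x + c*x^2 + d*x^3,
/// where x is the luma in the 0..=100 range.
const CURVE: [f64; 4] = [0.0, 0.2, 0.004, 0.00002];

/// Evaluate the polynomial and clamp the result to a sane reduction range.
fn reduction_for(luma: u8) -> u64 {
    let x = luma.min(100) as f64;
    let r = CURVE[0] + CURVE[1] * x + CURVE[2] * x * x + CURVE[3] * x * x * x;
    r.clamp(0.0, 100.0).round() as u64
}

fn main() {
    for luma in [0u8, 25, 50, 75, 100] {
        println!("luma {luma:>3} -> reduction {}", reduction_for(luma));
    }
}
```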
Would you like to try how a smooth adjustment would feel? I'm thinking of just hardcoding the curve that ChatGPT fit on your data; it should feel as if you had extended your config to all 100 luma values. I suppose something like this?
|
In terms of code layout, we should move away from using the "als" term here; I think what we should do is rename it accordingly. In terms of config, every output would define which predictor it uses, together with that predictor's data. Would you like to try to adjust the layout? Just to say this explicitly, don't worry if you don't - the important part here is to try different ideas and find a solution we are happy with; I can shuffle the final code around afterwards, no problem 😁 |
I realize now that with polynomials you can describe anything that is possible with the current config. So my previous argument isn't useful. Curves are a superset of the current config.
I think the current config is easier to understand and write than a curve specified by an equation, whose parameters are pretty abstract to grasp just by looking at them.
It also avoids depending on anything like that external tool. It would be a different story if this type of GUI were built into wluma, but that is pretty much out of scope.
It feels pretty smooth in usage right now, and there is room for more smoothness with the current config. Curves might be a bit more expensive to compute multiple times a second? I guess not by much? |
Regarding code layout, config keywords, etc., I don't have any strong opinion, so we can do anything that feels good to you there. No issues. In fact, the more I think about it, the more I feel that configuring with curves is alright too, other than the points I already mentioned, so we can do whatever you as the maintainer feel is best for the project. I just want to thank you for the engaging discussions; it was really fun. I find your great attention to detail and UX very inspiring. |
And I forgot to add: per-display config for manual mode is a must, yes. But maybe the predictor choice (manual or auto) could be global? What do you say? |
I think you had good points regarding the curve: it's nice that it can be concise and smooth, but you are definitely right that it's much harder to read, understand, debug, and make slight adjustments to. Regarding the smoothness, I believe the existing predictor in ...
I think it will simply be easier to have a predictor's additional data (such as the points in this case) together with the definition of which predictor to use, rather than having two different places (a global predictor plus a per-output link to that global predictor with its additional data). Plus you might potentially want to use different predictors, e.g. the one in "main" for the keyboard, if you want your keyboard to be tied to ... |
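To make the layout idea concrete, here is a hypothetical sketch of config types where each output names its predictor and carries that predictor's data inline; the key names and the `[[output.backlight]]` shape are placeholders for discussion, not the final wluma schema:

```rust
use std::collections::BTreeMap;

/// Intended config shape (placeholder keys), e.g.:
///
///   [[output.backlight]]
///   name = "eDP-1"
///   predictor = "manual"
///   thresholds = { 20 = 5, 60 = 10, 80 = 20 }
enum Predictor {
    /// The learning predictor needs no extra per-output data in this sketch.
    Adaptive,
    /// The manual predictor carries its luma -> reduction points inline.
    Manual { thresholds: BTreeMap<u8, u64> },
}

struct Output {
    name: String,
    predictor: Predictor,
}

fn main() {
    let output = Output {
        name: "eDP-1".into(),
        predictor: Predictor::Manual {
            thresholds: BTreeMap::from([(20, 5), (60, 10), (80, 20)]),
        },
    };
    match &output.predictor {
        Predictor::Manual { thresholds } => {
            println!("{} uses the manual predictor with {} points", output.name, thresholds.len());
        }
        Predictor::Adaptive => println!("{} uses the learning predictor", output.name),
    }
}
```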
Yeah, this too.
That's a good idea.
Yeah, you are right. I guess we are at a point now where we can decide what's left to merge this PR. Can you write up the final adjustments that are needed? Then we can divide the work between us, or either one of us can do it all. What do you say? |
|
Well, when you put it in those magic words it suddenly makes sense to me 😆. You said the exact same thing about the config before, but I couldn't understand it then. Sorry for that. Okay then, I am renaming it to ...
Awesome 😁😁 |
[als.iio]
path = "/sys/bus/iio/devices"
thresholds = { 0 = "night", 20 = "dark", 80 = "dim", 250 = "normal", 500 = "bright", 800 = "outdoors" }

What are "night", "dark", "dim", "normal", "bright", and "outdoors" used for in this config? Edit: Never mind, I read it in the readme. So it is only used to give the learning data point fields some name? |
Yeah, to give ranges of ALS values some human-friendly name, and then additionally to associate training data with the named range (also in part because a real ALS is logarithmic, so a jump from 0 to 10 feels like a lot more than from 10 to 100, which makes it hard to use the reading purely as a coefficient). So when we need to predict a value, we only use the training data available for this one particular range, the current one. In other words, it's again basically different curves for different ALS ranges; the only difference is that with the "manual" predictor we define the curves in advance, while with the "smart" predictor you build the curves as you continue to use wluma and adjust values from time to time. Every time you adjust values you are basically giving an additional point to one specific curve, making it that much more precise. |
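For illustration, a minimal sketch (not the actual wluma implementation; names are mine) of how such a thresholds table could map a raw ALS reading to its named range:

```rust
/// Map a raw ALS reading to the human-friendly name of its range, given
/// `(lower_bound, name)` pairs sorted by the lower bound (as in the
/// `thresholds` table above).
fn als_range_name<'a>(reading: u32, thresholds: &[(u32, &'a str)]) -> &'a str {
    thresholds
        .iter()
        .rev()
        .find(|(lower, _)| reading >= *lower)
        .map(|(_, name)| *name)
        .unwrap_or("night")
}

fn main() {
    let thresholds = [
        (0, "night"),
        (20, "dark"),
        (80, "dim"),
        (250, "normal"),
        (500, "bright"),
        (800, "outdoors"),
    ];
    assert_eq!(als_range_name(5, &thresholds), "night");
    assert_eq!(als_range_name(300, &thresholds), "normal");
    assert_eq!(als_range_name(1000, &thresholds), "outdoors");
}
```

The lookup simply picks the highest lower bound that the reading reaches, which mirrors how the manual luma table is discussed elsewhere in this thread.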
Currently I have hard-coded it to always use ... |
Ready for final review. |
Many thanks! As I started reviewing, it was easier to just make some style changes right on the spot - nothing objective, just personal preferences (e.g. matching the code layout to the existing controller), namings, etc. And once I had that, copy-pasting the ALS bit was trivial, so I did it here as well, hope you don't mind :) I also expanded a couple of tests to clarify the intended behavior of the manual controller even more. Would you like to give the final code another go, just to double-check that everything is still as expected? |
Yeah yeah, no issues. You obviously know the codebase much better, and it is good that everything is more in line with each other now. I don't mind the name change(s) either; I am very particular about these things myself in my own projects 😂. More tests are also good. So thanks.
Sure, I will do that soon. |
You did an excellent job describing the new feature in the docs. I just pushed a minor typo fix. |
And reading the diff for all your commits, I have learned some new things as a Rust beginner. |
Yup, works fantastic. I just tested. You can go ahead with the merge. |
By the way, can we connect on LinkedIn if you use it? No issue if not. I would love to know more about you and then stay in touch. Edit: I have sent you a connection request. |
So I decided to add the table that maps luma ranges to brightness reduction values under `als.none`. It made sense to think of `als.none` as also having a manual mode that doesn't rely on learning at all but is more, you know... manual. But underneath it just switches to a different predictor when `als.none.manual` isn't empty.

I am configuring it like this for testing:
How this table is interpreted: see the sketch below.
There is no sanity check on the new config right now.
Changing the brightness really quickly also shows a bit of weirdness, though I am not sure why that happens or how to solve it.
Also, I am a total beginner with Rust, so please don't hesitate to suggest any improvements; this probably has room for improvement.
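For illustration, here is a hypothetical sketch of how such a luma-to-reduction table could be looked up, treating each entry as the reduction that applies once the measured luma reaches that key (function and variable names are mine, not necessarily the PR's exact logic):

```rust
/// Look up the reduction for a given luma in a table of
/// (luma_threshold, reduction) pairs sorted by threshold. The reduction of the
/// highest threshold the measured luma reaches is used; below the first
/// threshold no reduction applies.
fn reduction_for(luma: u8, table: &[(u8, u64)]) -> u64 {
    table
        .iter()
        .rev()
        .find(|(threshold, _)| luma >= *threshold)
        .map(|(_, reduction)| *reduction)
        .unwrap_or(0)
}

fn main() {
    // The simple starter config mentioned in this thread: 20 = 5, 60 = 10, 80 = 20.
    let table = [(20u8, 5u64), (60, 10), (80, 20)];
    assert_eq!(reduction_for(10, &table), 0);  // below all thresholds
    assert_eq!(reduction_for(45, &table), 5);  // 20 <= luma < 60
    assert_eq!(reduction_for(95, &table), 20); // at or above the last threshold
}
```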
Resolves: #126