Pixel-Based Effect Classes #4549
base: main
Conversation
I like the base idea of only calculating pixels that actually exist; as you say, that's a big advantage for mapped setups.
I didn't produce a clean version that only contains the effect changes, so it's a bit hard to say. But the FPS is indeed a bit lower than on main, which is what I would expect because of the virtual functions and the lack of loops that can be optimized as a whole per effect. I believe it opens up more room for other optimizations, though.

Test: 16x16 dense 2D matrix, palette effect.
This PR:
Main 2d6ad41:

Blurring: if it is just part of some effects, I guess it would continue working the way it did before. If it requires reading the current state of the pixels, that is already part of this PR: the effect function takes x, y, and the current color as inputs and outputs one new color.
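To make that calling convention concrete, here is a hedged sketch of a per-pixel effect function under the shape described above (the function name is hypothetical; only the x/y/current-color-in, new-color-out signature is from the discussion):

```cpp
#include <cstdint>

// Hypothetical per-pixel effect function: the renderer supplies the pixel's
// coordinates and its current color, and the effect returns the new color.
// Example: a simple exponential fade, which only needs the current color.
uint32_t fadePixel(int x, int y, uint32_t current) {
  (void)x; (void)y;                    // this particular effect ignores position
  uint32_t r = (current >> 16) & 0xFF;
  uint32_t g = (current >>  8) & 0xFF;
  uint32_t b =  current        & 0xFF;
  r = (r * 240) >> 8;                  // scale each channel by ~0.94 per frame
  g = (g * 240) >> 8;
  b = (b * 240) >> 8;
  return (r << 16) | (g << 8) | b;
}
```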
That is a significant drop in FPS; unless this can be optimized, it's a no-go.
Thanks. Regarding "segment workings awareness": I've had a lengthy discussion in the past about the pros and cons of the approach PixelBlaze has taken (and why I chose not to go that way with WLED). Some things are easy to implement the way PixelBlaze does it; some are nearly impossible. So the best option would be to extend the segment API to allow both approaches. My first attempt was to create a

If you are unaware of PixelBlaze, please Google it. It has some nice approaches (including sparse set-ups) but is unfortunately a closed-source project.
Force-pushed from 91a7e7d to 23cafab.
Here is a new version that:

In any case, calling … Considering @blazoncek's segment blending, however, both stacking approaches should become obsolete anyway.

Using these changes, I managed to get the same performance as on the main branch I developed it on (47 FPS). The current push is the version rebased to current main, so here are some numbers for that. For simplicity, I compiled both with WLED_DISABLE_MODE_BLEND:

fa2949a (main):
23cafab (virtualEffectClasses):
According to @DedeHai, the speed increase with the blending approach is about 10%, as it does away with traversing the bus logic.

As mentioned earlier, I'm afraid of who will take on the effort of changing each and every effect, out of the 200+ currently available. But putting that aside, my main question is: what was the driving force for this change? If I understand the implementation correctly, an effect no longer renders the entire "canvas" in a single call but only returns a single pixel color for the supplied coordinates.

I considered the PixelBlaze approach back in 2022; it takes a similar approach as this PR for some of its "patterns", but I ultimately decided it would be too much hassle to modify every effect, and the benefit would be minor (as it currently stands). If someone could figure out an approach that allows two different APIs to coexist (i.e. the old one, where the effect draws the entire canvas in a single call, and the new one, where it has no knowledge of the canvas but responds to virtual (X,Y) coordinates), that would be the best IMO.
There are plenty, especially 2D effects that do a lot of pixel moving (i.e. shift all pixels left, etc.). Some use old pixel values to evolve pixels over time. If you want to omit that, the effect will need to store such information elsewhere, making the rewrite even more difficult.
Expect another uplift for every call to getPixelColor that blending can eliminate. In my version here, where I went from one call per pixel to zero calls, the improvement was ~28% in FPS.
Well, that should be about the same effect API change as it is for the main effects. IIRC usermods are unmaintained though...
Don't worry about that, I'd do it if you would merge such a change. But it's a bit of work, so that's why I'm asking whether you want this at all before doing it.
Correct.
Well, first of all it avoids some code repetition. It also enables some features to be implemented globally rather than once per effect, and it removes the need to check and adjust each and every one of the effects when you make a change (like making indexes unsigned, implementing effect layering, or changing the API for setting and getting pixel colors). I'm sure you remember the last time you had to do that, but you shouldn't have to. In the long run, it would make a lot of sense to iterate physical LEDs and map segments to them, rather than the other way round as it is now.
Some effects need that, yes. There are solutions to that, for example keeping everything the effect needs in its own memory, or giving it a private pixel buffer (see the sketch further below).
That's simple: keep everything as it is in main, and add a new (full-canvas) effect function that does what I do in this POC. However, it would lose pretty much all of the advantages, as it would not enable most (if any) of the refactoring possibilities discussed in this thread.
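For illustration, here is a hedged sketch of how the two APIs could coexist in one base class. All names here are hypothetical and this is not code from the PR, just the shape of the idea: legacy effects override the full-canvas entry point and render everything themselves, while migrated effects override the per-pixel call and inherit a default canvas loop.

```cpp
#include <cstdint>
#include <functional>

// Stand-in for the segment/canvas the renderer exposes to an effect.
struct Canvas {
  int width, height;
  std::function<uint32_t(int, int)> get;        // read a pixel's current color
  std::function<void(int, int, uint32_t)> set;  // write a pixel's new color
};

struct Effect {
  virtual ~Effect() = default;
  virtual void nextFrame(uint32_t /*now*/) {}   // per-frame state update
  // New per-pixel API: default is a pass-through.
  virtual uint32_t getPixelColor(int x, int y, uint32_t current) { return current; }
  // Old full-canvas API: the default implementation just loops over the
  // canvas and delegates to the per-pixel call, so migrated effects get it
  // for free while legacy effects can still override it wholesale.
  virtual void drawFrame(Canvas& c) {
    for (int y = 0; y < c.height; y++)
      for (int x = 0; x < c.width; x++)
        c.set(x, y, getPixelColor(x, y, c.get(x, y)));
  }
};
```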
Well that doesn't sound safe... especially when layering or transitions are involved.
Well, to be perfectly honest, it's just that the current state of the code base is so poor that I would like to improve it, which would also make working on it less hard. I'm an idealist; don't question my motives 😅 I listed a few advantages in the PR's description, but here are a few more :)
I have updated/migrated quite a few effects. I must admit I did not yet look at what your code does exactly (it's a bit above my C++ understanding), but I can tell you that migrating an effect can take several hours, so you need to be aware that you are volunteering for potentially hundreds of hours of tedious work in migrating and testing. If you think it's worth it, also consider that the particle system is now part of WLED and needs handling as well.
There are two nice examples that may prove difficult to implement with the new system: Tetrix and Bouncing Balls.
I re-implemented Bouncing Balls with the new approach, and took the opportunity to imagine what such an effect would look like in a dimension-agnostic world.

As for Tetrix, and possibly other effects that do not iterate the pixel space and set each pixel in their main loop, but instead run their logic and then determine the pixels they want to modify independently, or that store information in pixels: it's always possible to add a pixel buffer to those effects that they can work on in nextFrame(); a sketch follows below.

Pixel buffers would also have the advantage that their dimensions can be independent of the display dimensions; you could then do all sorts of transformations with them during getPixelColor().
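A minimal sketch of that pixel-buffer idea, assuming the nextFrame()/getPixelColor() split discussed in this thread (the class and member names are made up for illustration; this is not code from the PR). The buffer lives with the effect, so legacy "move pixels around" logic ports over almost unchanged:

```cpp
#include <cstdint>
#include <vector>

class BufferedEffect {
public:
  BufferedEffect(int w, int h) : w_(w), h_(h), buf_(size_t(w) * h, 0) {}

  void nextFrame(uint32_t now) {
    // Legacy-style logic: mutate the private buffer freely, e.g. shift
    // everything one pixel left and spawn a new column on the right.
    for (int y = 0; y < h_; y++) {
      for (int x = 0; x < w_ - 1; x++) buf_[idx(x, y)] = buf_[idx(x + 1, y)];
      buf_[idx(w_ - 1, y)] = (now & 1) ? 0xFF8000 : 0x000000;
    }
  }

  uint32_t getPixelColor(int x, int y, uint32_t /*current*/) const {
    return buf_[idx(x, y)];  // pure lookup; no reads from the segment needed
  }

private:
  size_t idx(int x, int y) const { return size_t(y) * w_ + x; }
  int w_, h_;
  std::vector<uint32_t> buf_;
};
```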
Force-pushed from 7ecb4a8 to f2a5d8e.
Question regarding effect transitions or overlapping segments: how do you intend to tackle those, assuming you can't read other segments' pixels? BTW, do not forget that (currently) … I see that you have moved the temporal property of the effect into the nextFrame() method, which is clever.
Force-pushed from f2a5d8e to 46660ae.
Alright, here is the Tetrix effect, with its own pixel buffer. The effect code itself is almost unaltered, just moved; go ahead and diff them! This approach should work for all effects, so this is probably as close as this proposal will get to supporting both APIs without giving up all of the advantages.
Layering: iterate physical LEDs and figure out the segments they belong to. Iterate those in order and let each effect process one pixel, passing the getPixelColor result to the first effect, and the result of each effect (the "current color") to the next effect; see the sketch below. At least, that was the original plan here. With gimp-style blending: don't pass colors to the effects, but apply blending functions instead.

About getPixelColor: I have thought about that quite a bit, and I believe it is simply something that shouldn't be accessible to effects. It's lossy, and they can never be sure that what they get is even related to what they put there before. Instead, they should just keep everything they need in memory. Apparently there is now enough RAM for that.

Transitions: the same as layering, really, ideally in the same iteration pass. If it's a transition that uses either one effect or the other, just do that during iteration; no need to execute both. If it's something like blending, execute both, blend them, and use that as the result. I used to have a POC for that, but I believe it's not in this PR anymore.

About your concerns on memory ownership and cleanup:
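A hedged sketch of the layering pass described above; all names are invented for illustration (WLED's real segment structures are richer than this). Physical LEDs are iterated once, and each covering layer's per-pixel call receives the previous layer's output as its "current color":

```cpp
#include <cstdint>
#include <vector>

struct Layer {
  int start, stop;                               // LED range this segment covers
  uint32_t (*apply)(int led, uint32_t current);  // the effect's per-pixel call
  bool covers(int led) const { return led >= start && led < stop; }
};

uint32_t renderLed(int led, uint32_t background, const std::vector<Layer>& layers) {
  uint32_t color = background;       // replaces what getPixelColor would return
  for (const Layer& l : layers)      // layers ordered bottom to top
    if (l.covers(led))
      color = l.apply(led, color);   // chain: each effect sees the layer below
  return color;                      // final color is written to the bus once
}
```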
Well, yes; separating nextFrame and getPixelColor is the fundamental idea here.
Good. It is a shame that you force-pushed; all in-line comments are lost because of that. I do believe that creating an effect (base) class is the correct way forward; however, I'm not so sure that coordinate-based effect calls are viable for every existing effect (at least not without some cumbersome or convoluted approach). Consider PS effects with hundreds of particles, for instance. But perhaps someone can figure that out too. I urge @DedeHai, @netmindz and @softhack007 to take a look and have a say. As for me, until all current effects are modified and implemented with this new approach (or something entirely different), I will refrain from additional comments and code examination. I will continue working on segment layering/improving effect transitions, assuming segment copy will still be used.
Crap. I didn't realize there were any.
Well, that's the reason this PR exists: should I go and modify all current effects, or won't that be merged anyway?
I am a bit behind with reviews.
Let me know if you have any concerns that haven't been addressed yet |
First: I am responding mostly to the discussion; I have not yet taken the time to review the implementation details in depth. As I understand things, there are a number of semi-related changes being proposed in this PR:
Overall thoughts on those changes:
Broadly: I think the overall goals are good, though I haven't reviewed the proposed implementation yet. I strongly feel we need to merge some version of #4550 ASAP to really unlock some of the benefits of heading in this direction.
Ignore some of the changes of this PR...

What I am trying to figure out with this PR is whether you'd be interested in an overhaul of how effects work. I created a virtual base Effect class, and implemented two effects as implementations of it:
wled00/effects/Effect.h
wled00/effects/StaticEffect.h
wled00/effects/PaletteEffect.h
Note that this effect class does not handle a segment as a whole like current effects do, but instead handles one pixel at a time. See how this is called in wled00/FX_fcn.cpp, WS2812FX::service().

Advantages of this approach:
… (… in service, rather than requiring changing of all effects.)

Disadvantages:
… (… per service() call instead of per-pixel iteration, but C++ provides no way of enforcing that.)

Outlook: currently, I only implemented fading transitions (untested). Theoretically, it would be possible to use an extended version of this Effect class to handle transitions, so that there would only have to be one code base for effects and transitions.
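For context, a minimal sketch of the shape such a base class might take. This is an illustration written for this summary, not the actual contents of wled00/effects/Effect.h, and all names besides Effect are made up:

```cpp
#include <cstdint>

// Illustrative sketch only; the real Effect.h in this PR may differ.
class Effect {
public:
  virtual ~Effect() = default;
  // Called once per frame: all time-dependent state is advanced here.
  virtual void nextFrame(uint32_t now) = 0;
  // Called once per pixel: maps coordinates (and the pixel's current
  // color) to the new color, with no knowledge of the whole canvas.
  virtual uint32_t getPixelColor(int x, int y, uint32_t current) = 0;
};

// A made-up example implementation in the spirit of PaletteEffect.
class SolidRainbow : public Effect {
  uint8_t phase_ = 0;
public:
  void nextFrame(uint32_t now) override { phase_ = uint8_t(now >> 4); }
  uint32_t getPixelColor(int x, int y, uint32_t) override {
    uint8_t hue = uint8_t(phase_ + x + y);  // hue drifts over time and space
    return (uint32_t(hue) << 16) | (uint32_t(255 - hue) << 8);  // crude gradient
  }
};
```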
What do you think? Are you interested in this kind of work? Should I invest some time into advancing it?