Discussion about v2.0 API and architectures changes #2012
Won't have time to write more before the weekend but in a nutshell:
As you said, there is a risk of scope creep in the work towards v2.0. What I would suggest is that you do as much as you feel like. When you have something you're happy with, submit an MR and let's push it. In the spirit of "shipped is better than perfect", we could even "launch". I'll shorten this issue's name so it's easier to follow in emails. |
Hi @Zulko, if you only have time for this one question: which of these two solutions would you prefer?
I would probably advocate for 2 if we only have 3-5 essentials; for more I would go for 1. If 2, what would you consider to be the absolutely essential effects in the current list? In both cases, what should be the names of those methods? I have 3 suggestions:
If 2, I would personally advocate for going with B. |
Yeah, I'm kind of unhappy with the singular name too, but after using it (I had to write quite a few informal tests to see if my migration of effects worked as expected), I feel like supporting a list as well as a single effect is really beneficial in terms of readability, both when calling one or multiple effects. Honestly, supporting both is very easy, and because I do type hinting every time I can, it makes it ultra clear for the user and the IDEs. This is how it's done:

def with_effect(self, effects: Union['Effect', List['Effect']]):
    """Return a copy of the current clip with the effects applied

    >>> new_clip = clip.with_effect(vfx.Resize(0.2, method="bilinear"))

    You can also pass multiple effects as a list

    >>> clip.with_effect([vfx.VolumeX(0.5), vfx.Resize(0.3), vfx.Mirrorx()])
    """
    if not isinstance(effects, list):
        effects = [effects]

    new_clip = self.copy()
    for effect in effects:
        # We always copy the effect before using it; see Effect.copy
        # to see why we need to
        effect_copy = effect.copy()
        new_clip = effect_copy.apply(new_clip)

    return new_clip
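The comment above references `Effect.copy`; as a rough sketch (illustrative only, not the final MoviePy API), such a base class could look like this:

```python
import copy


class Effect:
    """Hypothetical base class for effects; names are illustrative."""

    def copy(self):
        # Deep-copy so that applying an effect to one clip cannot leak
        # mutated parameters into a later application of the same instance.
        return copy.deepcopy(self)

    def apply(self, clip):
        # Subclasses return a new, modified clip here.
        raise NotImplementedError
```

With this in place, `with_effect` can safely mutate `effect_copy` internally without affecting the instance the user passed in.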
Yeah, I agree it seems to be the logic in the Python world (I'm coming from PHP, where construct is more frequently used for that). Anyway, I've been mostly using
Yep, it's already done ^^. Basically, all audio effects can be applied onto
I like that!!! I will finish adding the effect shortcuts into clip as soon as I have your answer, then I will freeze further changes to the API and call it "definitive" ^^. Then I will update the unit tests (while keeping all ropes and stools out of reach), and finish updating the doc. And finally, I can do the MR... I would also like to add a way to test all doc examples automatically, as well as update the Docker image so we can have a shared environment for testing. I hope I can do all of that in the next 1 to 2 weeks. After that I will have to wrap it up anyway. |
Regarding the core effects, I think adding a handful of effects manually is the way. Some really core effects can have their own short name so they flow well: For other effects, I would say anything that shows that it is outplace (e.g. starting with Regarding
Each of these points is light and not very important, but they add up, and in general I have mostly regretted using mixed-type inputs. For the rest, good calls 👍 |
kind of missed that but ideally we wouldn't need |
Thanks for taking the time @Zulko :)
I would have liked This way we only have one path for the user to modify a clip, through the usage of
Okay, I will go with
That would be ideal indeed, but the truth is: working with effects without modifying internal state makes things very hard and unintuitive. Because we set effect params at effect instantiation, but clip properties only become accessible on the call to apply, we frequently need to update effect params from a default value to a real value coming from the clip. We could avoid doing so by only using local variables and never touching the internals, but that would result in a lot of boilerplate and would probably increase the usage of closures and other kinds of "advanced" techniques that make things less readable (and of course writable) for the average user. Even though we could consider enforcing strict rules internally to prevent side effects when writing internal code, we really can't assume end users will do the same when writing custom effects, even if we put it in the doc in red glowing letters. Trust me on that one, this one line of code will save us hundreds of hours of fastidious debugging and seemingly inexplicable issues in the long run. |
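To make the stale-state problem concrete, here is a toy illustration (not MoviePy code; all names are made up) of what happens when an effect fills in a default from the clip and is then reused without copying:

```python
import copy


class FreezeAtEnd:
    """Toy effect: t=None means "use the clip's duration"."""

    def __init__(self, t=None):
        self.t = t

    def apply(self, clip_duration):
        # Filling in the default mutates internal state...
        if self.t is None:
            self.t = clip_duration
        return self.t


# Reusing one instance on two clips leaks state from the first clip:
fx = FreezeAtEnd()
print(fx.apply(10))  # 10
print(fx.apply(20))  # 10 again: stale value from the first clip

# Copying before each application keeps every use independent:
fx = FreezeAtEnd()
print(copy.deepcopy(fx).apply(10))  # 10
print(copy.deepcopy(fx).apply(20))  # 20
```

This is the failure mode that the single `effect.copy()` line in `with_effect` is there to prevent.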
You are right regarding consistency, but core-core features can be allowed to be a bit irregular. My thinking here is that core examples will be shorter/nicer (which is petty, but counts). Side note that
I would say follow your heart; this part of the implementation doesn't affect the user experience and so can be revisited later. If the idea is to allow effects to be re-evaluated in the presence of a clip, then I would suggest the following:

def with_effects(effects):
    new_clip = clip.copy()
    for effect in effects:
        updated_effect = effect.updated_by_clip(new_clip)
        new_clip = updated_effect.apply(new_clip)

This way:
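A runnable toy version of this loop, assuming `updated_by_clip` returns an updated copy (all names, and the dict standing in for a real clip, are illustrative, not MoviePy code):

```python
import copy


class Effect:
    def updated_by_clip(self, clip):
        # Default: nothing to update, but still return a copy so the
        # original instance is never mutated.
        return copy.deepcopy(self)


class CutAtEnd(Effect):
    """Hypothetical effect that fills a missing parameter from the clip."""

    def __init__(self, t=None):
        self.t = t

    def updated_by_clip(self, clip):
        new = copy.deepcopy(self)
        if new.t is None:
            new.t = clip["duration"]
        return new

    def apply(self, clip):
        return {**clip, "cut_at": self.t}


def with_effects(clip, effects):
    new_clip = dict(clip)  # stand-in for clip.copy()
    for effect in effects:
        updated_effect = effect.updated_by_clip(new_clip)
        new_clip = updated_effect.apply(new_clip)
    return new_clip


fx = CutAtEnd()
result = with_effects({"duration": 10}, [fx])
print(result["cut_at"])  # 10
print(fx.t)              # None: the shared instance stayed pristine
```

The key property is that the effect the caller holds is never modified, so it can be reused on a second clip with different properties.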
|
Okay, I will go with the short name then. If we want to go back to with_effects we can indeed do that later. But I really think just copying the effect will be the simplest/most natural way to achieve consistent behavior when reusing an effect. |
Do it your way 👍 |
Okay, it's done! I will go back to documentation and probably a lot of unit test fixing ^^. If anyone wants to have a peek at the new system, you can find the current state of the code at https://github.com/OsaAjani/moviepy |
I had a look at the effects, looking good 👍 One interesting thing to try to get an idea of the new API, would be to update the examples in the example folder (maybe only the ones that are reproducible! Dusting off the 10-year-old examples might be a rabbit hole we don't want to get into now). |
Yeah, I thought about the examples too, but I will not have the time to update them to the new API. The new documentation has quite a few more code examples along the road, though. The goal would be to have something more like the pandas docs: a getting started section with install instructions, the main concepts and a presentation of MoviePy, plus a 10-minute tutorial for users who want a rapid understanding of the basics. Then a user guide with more in-depth explanations of the different objects, etc., for users who want to dig deeper and understand better. And finally the API reference with the documentation of each function, for full understanding. |
Reading along after a few days of development on my side projects, testing speed effects of FFMPEG and similar video tools. Impressive work @OsaAjani, and thank you @Zulko for driving things so strongly forward. A few comments about readability, and about scope creep:
@OsaAjani Do you need me to read something tomorrow morning? Some code? Documentation? |
Unable to catch up on everything posted so far but quickly checking in to say thanks, @OsaAjani, for creating a separate issue! I pinned it alongside the Future of MoviePy issue it branched off of + the Roadmap v2 issue for quick & easy access & so it catches people's attention. |
TL;DR, Doing some preview and show of clip, I think I found that
Well, there is something I just stumbled upon and I'm not sure I understand it. So, here is my problem: I think there is, and probably always has been, a bug in MoviePy's handling of composite clip masks, but I'm not sure, maybe I'm just missing something. It's kind of hard to explain, probably because things are still unclear in my head, but I will try to explain as well as possible. Also, it is 3PM right now and I wanted to post all of that while it was still fresh, but I kind of stopped thinking about it an hour ago, so please excuse me if my explanations and ideas are kind of confused and all over the place... As you know, I've been doing some rework of the preview/show functions to use ffplay and show. While writing some code for the intro tutorial, I encountered some unexpected behavior when using the show and preview functions; more precisely, when previewing a composite video clip with images that have masks, I had a black background where it should have been transparent. Both for preview and show. What was strange was that it only happened on preview, on

Looking more in depth at what seems to be the problem, I think I nailed it down to part of the code that I copy/pasted from how MoviePy previously computed a frame to be sent to ffmpeg for writing, precisely the part that I removed (I have fixed the problem), and I invite you to go and see the changes in

if clip.mask is not None:
    mask = 255 * clip.mask.get_frame(t)
    if mask.dtype != "uint8":
        mask = mask.astype("uint8")
    frame = np.dstack([frame, mask])

This part basically applies the clip mask. That led me to think that the masks computed by composite video clips are bad. Are we maybe applying the mask twice? Like for composite video clip

In fact I think the whole generation of transparent video has always had that bug (at least for v2), and that we simply never saw it because viewing transparent video is so hard (almost all tools use a black background rather than a mosaic like photo previewers, or simply drop transparency), and people were generating videos in formats that do not support transparency, making ffmpeg just ignore the buggy alpha channel, or with simple enough videos (without compositing, or without overlapping transparent/non-transparent parts of images), so the bug was never spotted or clearly identified earlier. What is sure is that if I also remove that part from the video writer, it does not change the result. So, I would need someone to look at that and tell me either that I'm wrong, and transparent video generation with overlapping images (such as an image with transparency under a partly overlapping image with transparency) does work as expected, or that I'm right and this is buggy and needs to be fixed. In which case I would simply propose that we drop transparent video generation support for now. The bug is probably only visible when you have multiple clips with overlapping mask/non-mask zones, like a text behind an image with a transparent background, as in my example. If it can help, and because I will forget this if I don't write it somewhere: I think the composite video clip just combines all the individual clips' masks into one new composite video clip and sets this as the mask. But in fact, the composite video clip should probably never have any mask, at least if we don't intend to support transparency for video.
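That removed block converts a float mask into an alpha channel and stacks it onto the RGB frame; the same operation can be reproduced in isolation with plain NumPy (synthetic arrays, not MoviePy objects):

```python
import numpy as np

# A 2x2 RGB frame and a float mask in [0, 1], the range MoviePy masks use.
frame = np.full((2, 2, 3), 128, dtype=np.uint8)
mask = np.array([[1.0, 0.5],
                 [0.0, 1.0]])

# Scale the mask to 0-255 and stack it as a fourth (alpha) channel.
alpha = (255 * mask).astype("uint8")
rgba = np.dstack([frame, alpha])

print(rgba.shape)   # (2, 2, 4)
print(rgba[0, 1])   # [128 128 128 127]  (255 * 0.5 truncated to 127)
```

If a composite clip's frames already had the mask baked in, stacking the mask again here would apply it twice, which matches the symptom described above.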
Not sure if combining the individual masks into a composite video is bad logic (i.e. this is not how to compute the final mask), or if it is a bad implementation of composite video clip rendering when

I think the proper logic for combining masks would be something like a simple addition of all the masks, capped at 1, like a

Also, a reminder for later: I think we use quite a lot of Pillow to do kind of that in the

Again, sorry for going in all directions with that one. |
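The "addition capped at 1" rule suggested above could be sketched like this (illustrative NumPy only; function name and data are made up, not MoviePy code):

```python
import numpy as np


def combine_masks(masks):
    """Combine several [0, 1] float masks by summing their opacities
    and capping the result at 1, per the suggestion above."""
    combined = np.zeros_like(masks[0])
    for m in masks:
        combined = combined + m
    return np.minimum(combined, 1.0)


# Two overlapping masks: where both are partly opaque, opacities add up
# and saturate at fully opaque.
a = np.array([[0.0, 0.5],
              [1.0, 0.2]])
b = np.array([[0.3, 0.6],
              [0.0, 0.9]])
result = combine_masks([a, b])
```

An element-wise maximum (`np.maximum.reduce(masks)`) would be an alternative that never over-counts overlapping semi-transparent regions; which rule is correct depends on the intended compositing semantics.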
@OsaAjani I’ll revisit this in the AM when it is not 3AM as it is now. I noticed bugs in the |
Hey guys, first a little update: I have finished writing the introduction tutorial, the getting started part seems OK, the user guide is almost done, and the API doc is now fully auto-generated (using autosummary and a little customization). I still have to write the "upgrade from 1.0 to 2.0" page, and probably a few others, like adding some doc about the

I'm still not sure what we should do with the gallery and examples though (for both, the code is old, we probably lack some resources to remake them, and I will not have the time to update them); any thoughts? Maybe we could remove those for now and add them back when someone has updated them. I was also wondering if we should keep |
Can you post the link to the docs to review/read? |
I haven't committed it yet, still a few things to do. I know it would be easier for you if I published in a more atomic way, but I'm running short on time and will likely not have time to go back and forth after I release my changes. So I prefer to try to deliver everything as one coherent release to serve as a basis, and let you and others make the necessary adjustments after/before publication. |
Nope. Whatever works best for you. |
I'm motivated to eventually dust off the gallery of examples and adapt it to the new API. No preference for CompositeVideoClip, but it would make sense to me to keep it in its own file (I prefer smaller file sizes). |
So am I— @OsaAjani let me know if you want to do this together. I'm not motivated by individualist efforts at this time on my side. Personal preference— I'm here to make colleagues. You are doing excellent work. PS I have offline resources we should discuss at some point. Working orthogonally to you. |
That would be nice, we could include those in the getting started part of the doc. I won't have any time myself, but maybe you two can see about doing it together? On my part, I haven't been able to do much these last two days, but I think I'm more or less done with the documentation. I need to update my examples to be in sync with the latest API, and I really want to add some kind of testing on the doc examples. Nothing too fancy, just making sure the scripts run without error. This way we can be sure the doc is up to date, and it will also serve as some sort of functional testing with almost zero overhead. I also need to run the unit tests and fix everything that doesn't work (god I hate unit testing). Then it will be shipping time 🥳. After that I will probably not be able to work much on the project for at least one or two months; I'll make sure to check in now and then, but nothing as time-consuming. |
While working on updating tests, I think I found a bug in current master where GIFs written with imageio have no duration and an incorrect framerate; that should be fixed with my last commit. So if you do something GIF-related in the examples, make sure to use the latest commit. |
Hey @Zulko, I could use your help to re-introduce the tests on IPython in this commit: OsaAjani@5cd0cac. I don't understand IPython well enough, and I don't have the time to familiarize myself with it enough to understand what seems to go wrong (not sure, but I think I've seen someone say this unit test is already broken in current master). |
Looks good to my naive eyes. New syntax is very clear. One comment: do not
I will go through the gallery of examples tomorrow in the afternoon. |
Should we create a separate branch off |
I'm not sure we need it, as I think we are now no more than a few days away from making the PR, and I hope the review will be fairly quick. In the future though, I would advise that we keep only the master branch representing the current published version, and that we add a second branch named "dev" to make PRs against and prepare the next release. This way, a user landing on the GitHub repo will see the current state of things in sync with PyPI, and dev will give a preview of the future. By accepting PRs on dev frequently whenever unit tests pass, dev can also be used as some sort of nightly build, letting users try the latest features in advance. |
I had faced the issue 2-3 months back, but just ignored it (dropped the idea), as I thought I might be doing something wrong, since I am new to programming. But I had composited a complex composite clip (I am from a graphics background) which consisted of a mask which itself was a composite clip, consisting of images with their own masks. |
Oh yeah, I didn't mean to target this upcoming PR only, or specifically, I very much agree we should have two branches going forward (might have mentioned this in older comments as well), this PR would simply have been the one with which we could introduce this new system. I also think if we're already changing how we handle branches anyway, we should revisit an older suggestion of mine to rename the master branch to main since |
Seems like a good idea to me. |
Is there t
Did you find the solution to the alpha channel not being used in transparent WebM video? I also see black instead of the background. Looks like the alpha masking calculation is broken. |
I have absolutely no memory of the state of the issue in the end. It's been quite a long time... You might dig in the commits to see if I posted any comment about fixing alpha channel. |
Yes, webm file has alpha, verified using ffmpeg. I gave up on Moviepy and used ffmpeg directly. :( |
Maybe this issue should be renamed / closed ? |
Following up on @keikoro's suggestion in issue #1874, I open this issue to be the place where we can discuss the changes (general as well as specific implementations) we want to see for the v2.0 API and architecture.
If you were part of the technical discussions for the v2.0 API, please come here: @Zulko @mgaitan @tburrows13 @davidbernat